How Edge AI for Micromobility Safety Actually Works in the Real World
Insight · 11 Apr 2026 · 5 min read

Nearhuman Team

Near Human builds intelligent safety systems for micromobility — edge AI, computer vision, and human-centered design. Based in Bristol, UK.

An edge AI model that runs at 40 frames per second in bright sunlight and drops to 12 frames per second in heavy rain isn't a safety system. It's a fair-weather guarantee. That distinction, between a system that works in a lab and one that works in Bristol in November, is where most edge AI micromobility safety projects quietly fall apart.

The honest version of the story is more complicated than most vendor demos suggest. Edge AI, software that runs on a chip attached to the vehicle rather than on a remote server, offers a real and measurable advantage in response time. But putting a capable AI model on a small, battery-powered device that survives rain, vibration, heat, and the occasional 20 mph impact introduces constraints that change almost every design decision. News12 recently reported a rise in e-bike injuries among young riders on Long Island. Campus safety offices are banning shared scooters after injury spikes. City councils from Vienna to Texas are updating their ordinances. Everyone agrees there's a problem. Far fewer people agree on what a technically honest solution looks like.

The Hardware Gap Nobody Shows You in the Brochure

Running a computer vision model on a vehicle means choosing a processor that balances three things that all pull against each other: processing speed, power draw, and physical size. A chip powerful enough to run a full object-detection model at 30 frames per second can consume 8 to 15 watts continuously. A scooter battery isn't designed to feed that load on top of the motor and lights. The emerging answer is a class of chips called neural processing units, or NPUs, which are processors built specifically to run AI models efficiently rather than doing general computing. New embedded hardware like the Cincoze DX-1300 category of devices combines CPU, GPU, and NPU in one unit, which cuts both power draw and latency. At high volume, NPU-enabled modules are approaching a price point where fitting one to every vehicle in a 1,000-unit fleet becomes a real business decision, not a research project.
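
To make the trade-off concrete, here is a back-of-envelope sketch of what a continuous vision load does to riding time. All of the figures (battery capacity, motor draw, NPU wattage) are assumptions chosen for illustration, except the 8 to 15 watt range for a general-purpose vision chip, which comes from the paragraph above.

```python
# Illustrative power-budget sketch. The battery, motor, and NPU figures
# are assumptions for this example, not measurements of any real hardware.

BATTERY_WH = 460.0     # assumed shared-scooter battery capacity
MOTOR_W = 250.0        # assumed average motor draw while riding
VISION_CHIP_W = 12.0   # mid-range of the 8-15 W figure cited above
NPU_W = 3.0            # assumed draw for an efficient NPU-based module

def riding_hours(extra_load_w: float) -> float:
    """Battery life in hours with a given extra continuous load."""
    return BATTERY_WH / (MOTOR_W + extra_load_w)

baseline = riding_hours(0.0)
with_gpu = riding_hours(VISION_CHIP_W)
with_npu = riding_hours(NPU_W)

print(f"no vision system:      {baseline:.2f} h")
print(f"GPU-class vision chip: {with_gpu:.2f} h "
      f"({1 - with_gpu / baseline:.1%} range loss)")
print(f"NPU module:            {with_npu:.2f} h "
      f"({1 - with_npu / baseline:.1%} range loss)")
```

The exact numbers matter less than the shape of the comparison: a few watts of difference, sustained over every minute of every ride, is the gap between a feature operators can absorb and one they can't.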

The harder constraint isn't power or cost. It's the model itself. A detection model trained mostly on car dashcam footage performs poorly at handlebar height, where the camera angle is lower, the vibration is higher, and the relevant hazards (kerbs, pedestrian feet, low bollards) are different from what a car driver faces. Getting a model that generalises across rain, glare, and motion blur requires training data from those conditions specifically, not just augmented versions of dry, well-lit images. Edge Impulse and similar on-device ML platforms have made it far easier to train and deploy compact models. But there is no shortcut around the data collection problem. The model is only as good as the conditions it has seen.
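
One practical consequence is how a model should be evaluated: detection recall reported per operating condition, rather than one blended number. A minimal sketch, with invented condition labels and counts chosen purely to illustrate the failure mode:

```python
# Condition-stratified evaluation sketch. The labels and counts below
# are invented for illustration; they are not real test results.

from collections import defaultdict

def recall_by_condition(results):
    """results: list of (condition, detected) pairs for labelled hazards."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for condition, detected in results:
        totals[condition] += 1
        hits[condition] += int(detected)
    return {c: hits[c] / totals[c] for c in totals}

# Invented example: strong in dry daylight, weak in rain at dusk.
results = (
    [("dry_daylight", True)] * 95 + [("dry_daylight", False)] * 5 +
    [("rain_dusk", True)] * 60 + [("rain_dusk", False)] * 40
)
print(recall_by_condition(results))
```

In this invented example the blended average across all 200 hazards is 77.5 per cent, which looks respectable; the per-condition breakdown shows the system missing two in five hazards in exactly the conditions where riders need it most.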

What City Planners and Fleet Operators Should Actually Ask

If you run a fleet or advise a city on micromobility contracts, the right question is not 'does it use AI?' The right questions are: what is the detection accuracy at dusk, in rain, after 18 months of physical wear? What happens when the model is wrong, and how often is it wrong? A false positive rate that triggers an alert every ten minutes will train riders to ignore every alert. A false negative rate that misses one in five pedestrians isn't a safety system at all. These are the numbers that determine whether a system is worth the hardware cost and the contract clause. Any supplier who can't give you these figures across real operating conditions is answering a different question than the one you asked.
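
The arithmetic behind the false-positive warning is worth doing explicitly. Even a per-frame error rate that sounds tiny compounds fast at real-time frame rates. The rate below is an assumed figure for illustration:

```python
# How a small per-frame false-positive rate becomes a rider-facing
# nuisance. The FP_RATE value is an assumption for illustration.

FPS = 30          # frames processed per second
FP_RATE = 1e-4    # assumed false positives per frame (0.01%)

false_positives_per_hour = FPS * 3600 * FP_RATE
print(f"~{false_positives_per_hour:.1f} spurious alerts per hour of riding")
```

A 0.01 per cent per-frame error rate sounds excellent on a datasheet; at 30 frames per second it is a spurious alert roughly every five to six minutes, which is exactly the regime where riders learn to tune the system out.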

The good news is that the underlying technology is improving on a fast curve. The YOLOv5 family of object-detection models, a class of fast, compact AI models designed to spot objects in video in real time, can now run on embedded hardware at useful frame rates, at a fraction of what that cost two years ago. Pedestrian detection research from the autonomous vehicle space is feeding directly into micromobility applications. The field is moving. What's lagging is honest evaluation: third-party testing of these systems in the conditions they'll actually face, not the conditions that make the demo look good. Until that infrastructure exists, the gap between a marketed safety feature and a tested one will stay wide.

Micromobility is at the same point the automotive safety industry was in the early days of ABS: the technology is proven in controlled conditions, the deployment is patchy, and the people who'll be protected by it are already on the road. The cities and operators who ask hard questions now, before they sign contracts, before they deploy hardware, will build safety systems that actually work. The ones who buy a brochure will learn the difference the hard way. The riders don't get to choose which kind of fleet they pick up.

Frequently Asked Questions

What is edge AI and why does it matter for micromobility safety?

Edge AI means running an AI model directly on a device, like a scooter or e-bike, rather than sending data to a remote server. For safety applications, this matters because the round-trip delay to a cloud server is 200 to 400 milliseconds, which is too slow to warn a rider before a collision. On-device processing can respond in under 50 milliseconds.
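
Those latency figures translate directly into distance travelled before a warning can fire. A quick sketch, using the millisecond figures above; the 25 km/h riding speed is an assumption (a common shared-scooter speed cap):

```python
# Distance covered during system latency. The latency figures come from
# the text above; the riding speed is an assumed typical speed cap.

def metres_travelled(speed_kmh: float, latency_ms: float) -> float:
    """Distance in metres covered at speed_kmh during latency_ms."""
    return (speed_kmh / 3.6) * (latency_ms / 1000.0)

SPEED_KMH = 25.0  # assumed shared-scooter speed cap

cloud = metres_travelled(SPEED_KMH, 300.0)  # mid-range cloud round trip
edge = metres_travelled(SPEED_KMH, 50.0)    # on-device upper bound

print(f"cloud round trip: {cloud:.2f} m before a warning can fire")
print(f"on-device:        {edge:.2f} m")
```

At that speed, a 300 millisecond cloud round trip costs roughly two metres of travel before the rider can even be warned; on-device processing cuts that to about a third of a metre.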

What are the main technical challenges of putting computer vision on an e-scooter?

The main challenges are power consumption, physical durability, and model performance in poor conditions. A vision chip powerful enough to run real-time object detection can draw 8 to 15 watts, which competes with the motor and lights for battery capacity. Models also need to be trained on data from the actual conditions they'll face, including rain, low light, and the vibration and camera angle specific to a scooter's handlebars.

How should fleet operators evaluate edge AI safety systems for their scooters?

Operators should ask suppliers for detection accuracy figures specifically in low-light and wet conditions, not just ideal environments. They should also ask for false positive and false negative rates from real-world testing, not lab benchmarks. A high false positive rate erodes rider trust. A high false negative rate means the system misses the hazards it was built to catch.
