
Edge AI for Micromobility: What It Can Do Right Now, and What It Can't
Nearhuman Team
Near Human builds intelligent safety systems for micromobility — edge AI, computer vision, and human-centered design. Based in Bristol, UK.
Last year, more children arrived at Long Island emergency rooms with e-bike injuries than in any previous year on record. The bikes had no meaningful safety systems on board. Meanwhile, the embedded AI market is projected to grow past $70 billion by 2034, with edge AI (software that makes decisions directly on a device, without sending data to the cloud) at the centre of that growth. The gap between what the technology can now do and what's actually running on the micromobility devices injuring people is not a research gap. It's a deployment gap. And it's worth being honest about both sides of it.
The honest version of edge AI for micromobility safety is more interesting than the marketing version. The marketing version says: put a chip on a scooter, point a camera at the road, and the scooter becomes intelligent. The real version involves genuine trade-offs between processing power and battery life, between model accuracy and the cost of running it at scale, and between what a sensor can detect and what a rider can actually do with that information in under a second. Cities and investors are right to push past the headline claims. The question isn't whether edge AI can improve micromobility safety. It clearly can. The question is which specific problems it solves well today, and which ones still need more work.
Where Edge AI for Micromobility Safety Actually Works
Pedestrian detection is the clearest win. Computer vision models using architectures like YOLOv5, a well-tested approach that locates and labels objects in a video frame in real time, can now run on small embedded chips with acceptable accuracy. In daylight, on a clear road, a well-trained model can detect a pedestrian crossing ahead of a scooter at 20 mph with enough time to trigger a speed reduction or a haptic alert to the rider. That's a real capability, not a demo.

Rider behaviour analysis is equally well-suited to edge processing. Detecting a sudden swerve, an abrupt stop, or a pattern of erratic acceleration doesn't require a camera at all. An accelerometer and a small trained model can flag dangerous riding in real time, log it, and send a summary to the fleet operator when the scooter next connects to a network. That means the insight doesn't depend on constant connectivity. It works on a scooter in a tunnel, in a rural area, or in a city with patchy coverage.
The harder problem is adverse conditions. A camera-based pedestrian detection model trained mostly on daytime footage will not perform the same way at dusk, in heavy rain, or when headlights cause glare on a wet surface. This isn't a flaw in edge AI specifically. It's a flaw in any vision system, including the ones being developed for self-driving cars. Research on autonomous vehicle pedestrian detection shows that false-negative rates, cases where a pedestrian is there but the system doesn't see them, climb significantly in low-light conditions even with high-end hardware. On a scooter with a smaller chip and a cheaper camera, that challenge is sharper. The responsible answer is sensor fusion: combining camera data with radar or lidar to fill the gaps that optics alone can't cover. The less responsible answer is to test in good conditions and ship.
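One simple way to think about sensor fusion is late fusion: each sensor produces its own detection score, and the combination rule down-weights the camera as light drops. The function below is a sketch under assumed numbers; the lux breakpoints, radar score, and weights are all illustrative, not values from any deployed system.

```python
def fuse_detections(camera_conf: float, radar_detected: bool,
                    ambient_lux: float, threshold: float = 0.5) -> bool:
    """Late-fusion sketch: trust the camera less as ambient light drops,
    and let radar confirm detections the camera would miss in the dark.

    All constants here are illustrative assumptions.
    """
    # Scale camera trust linearly between ~10 lux (night) and ~400 lux (daylight).
    light_weight = min(max((ambient_lux - 10) / 390, 0.0), 1.0)
    camera_score = camera_conf * light_weight
    radar_score = 0.6 if radar_detected else 0.0
    # Either sensor alone can cross the threshold; agreement reinforces.
    combined = max(camera_score, radar_score) + 0.2 * min(camera_score, radar_score)
    return combined >= threshold
```

The point of the sketch is the failure mode it fixes: at 10 lux a confident-looking camera detection contributes nothing, so a pedestrian at night is caught (or missed) by the radar, which is exactly the gap the optics-only system leaves open.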
The Battery Problem Nobody Puts in the Brochure
Here's the constraint that rarely makes it into a pitch deck. Running a computer vision model continuously on a scooter draws power. A neural network capable of classifying video at 25 frames per second might add 10 to 20 percent to the daily energy draw of a scooter that already has a limited range. For an operator running 500 scooters, that means earlier recharges, more logistics, and higher costs. The answer isn't to run the model less. It's to be smarter about when you run it. A well-designed system wakes its vision model when the scooter's speed and location suggest it's in a high-risk zone, like a shared pedestrian space or a busy junction, and scales back when the scooter is on an open road or stationary. That kind of conditional processing, sometimes called duty cycling, is where the real engineering craft lies. It's not glamorous. But it's what makes a safety system that works in a lab also work after eight hours on a Bristol street in November.
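The duty-cycling logic described above can be stated very compactly. The mode names and the speed threshold below are hypothetical, and a real system would derive the high-risk flag from geofence data rather than a boolean argument, but the shape of the decision is the point.

```python
def processing_mode(speed_kmh: float, in_high_risk_zone: bool,
                    moving_threshold_kmh: float = 3.0) -> str:
    """Duty-cycling sketch: decide how much perception to run right now.

    Hypothetical modes:
      "idle"  -- stationary: IMU only, vision model asleep
      "light" -- open road: low-frame-rate vision, full IMU monitoring
      "full"  -- moving through a high-risk zone: full-rate pedestrian detection
    """
    if speed_kmh < moving_threshold_kmh:
        return "idle"       # a parked scooter shouldn't burn battery on vision
    if in_high_risk_zone:
        return "full"       # shared pedestrian space or busy junction
    return "light"
```

Because the expensive mode only runs when speed and location both justify it, the 10 to 20 percent energy overhead applies to a fraction of the day rather than all of it.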
The gap between what edge AI can do and what's running on the devices injuring people today is not a research gap. It's a deployment gap, and that difference matters enormously.
Cities evaluating micromobility programmes are asking better questions than they were three years ago. Transport planners in Vienna, San Francisco, and Bristol have been burned by technology that worked in presentations and broke in January. The standard they're moving toward is simple: show us the system working at night, in the wet, after six months in the field, on hardware that costs less than the scooter it's attached to. That's a fair standard. It's also the standard that separates edge AI for micromobility safety as a genuine tool from edge AI as a procurement talking point. The technology is ready to do real work. The question every operator and city planner should be asking is not 'does this use AI?' but 'what does this system do when the conditions get bad, and how do you know?'
Frequently Asked Questions
What is edge AI and why does it matter for micromobility safety?
Edge AI refers to software that runs directly on a device, like an e-scooter, rather than sending data to a remote server. For micromobility safety, this matters because it eliminates network delay, allowing a scooter to detect a hazard and respond in under 50 milliseconds. It also means the system works without a mobile signal.
Can edge AI detect pedestrians on an e-scooter in real time?
Yes, in good conditions. Computer vision models running on embedded chips can detect pedestrians at safe braking distances during daylight. Performance drops in rain, at dusk, and under artificial lighting. Combining camera data with other sensors like radar improves reliability in these conditions, but adds cost and complexity.
Does running edge AI drain an e-scooter's battery significantly?
It can. A continuously running vision model may add 10 to 20 percent to a scooter's daily energy draw. Well-designed systems manage this by activating the full model only in high-risk zones and using lighter processing elsewhere. This duty-cycling approach keeps power use manageable without compromising safety where it's needed most.
Nearhuman Team
10 Apr 2026