
Why Real-Time Micromobility Hazard Detection Fails When You Need It Most
Nearhuman Team
Near Human builds intelligent safety systems for micromobility — edge AI, computer vision, and human-centered design. Based in Bristol, UK.
A rider on a shared e-scooter hits 18 mph on a wet cycle lane at 11pm. A car door opens 4 metres ahead. From the moment the door begins to move, a real-time micromobility hazard detection system has roughly 280 milliseconds to trigger a brake assist signal before physics makes the decision instead. Most systems shipping today aren't processing fast enough to act in that window. Not because the engineers were careless. Because they built for the demo, not the door.
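For readers who want to check the arithmetic, here is a minimal back-of-envelope sketch of that time budget. The 200-millisecond allowance for brake actuation and rider response is an illustrative assumption on our part, not a measured figure; the point is how little of the roughly half a second to contact is left for detection itself.

```python
# Back-of-envelope time budget for the scenario above: 18 mph rider,
# car door opening 4 metres ahead. The 200 ms allowance for brake
# actuation and rider response is an illustrative assumption.

MPH_TO_MS = 0.44704                      # metres per second per mph

rider_speed = 18 * MPH_TO_MS             # ~8.05 m/s
gap_to_door = 4.0                        # metres

time_to_contact = gap_to_door / rider_speed         # ~0.50 s
actuation_and_response = 0.200                       # assumed overhead, seconds
detection_budget = time_to_contact - actuation_and_response

print(f"time to contact:  {time_to_contact * 1000:.0f} ms")
print(f"detection budget: {detection_budget * 1000:.0f} ms")
# -> roughly 300 ms for the whole sense-infer-signal pipeline,
#    consistent with the ~280 ms figure quoted above.
```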
The micromobility industry is currently navigating a regulatory squeeze from multiple directions. Florida is moving a new e-bike and scooter safety bill. Vienna is revising how it handles liability for mobility device incidents. Campuses are banning scooters outright after accident spikes. Every one of these responses treats safety as a policy problem. That framing is understandable and almost entirely wrong. The gap isn't in the ordinances. The gap is in what the hardware on the device actually does in the 300 milliseconds before contact, in the rain, at low light, on a road surface that hasn't been resurfaced since 2009. Policy can mandate safety systems. It cannot mandate that those systems work.
The Latency Problem That Benchmark Sheets Don't Show
Most computer vision pipelines for micromobility are validated in controlled conditions: good light, clean lenses, consistent frame rates. In those conditions, a capable edge AI model running on a mid-tier embedded processor can achieve detection latency under 100 milliseconds and maintain 28 to 35 frames per second. Ship that same system into a fleet operating in Bristol in November, and the numbers shift in ways that aren't linear. Rain on the lens drops effective detection range. Cold degrades battery output to the compute module, which throttles clock speed, which adds 40 to 60 milliseconds of latency per inference cycle. A pedestrian stepping off a kerb at 1.2 metres per second has moved 7 centimetres in that extra window. That is the difference between a system that responds and a system that files a post-incident report. The embedded AI market is expanding fast, with analysts projecting significant growth through 2034, but market size tells you nothing about whether any given deployment actually performs under operational stress. Most don't get tested until someone is already hurt.
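To make that compounding concrete, here is a small sketch of the arithmetic above. It uses the upper end of the quoted throttling penalty and the ideal-condition latency figure mentioned earlier; the scenario is illustrative, not a benchmark.

```python
# What a cold-throttle latency penalty means in pedestrian displacement.
# Numbers mirror the ranges quoted above; the scenario is illustrative.

baseline_latency_s = 0.100      # ideal-condition end-to-end inference
throttle_penalty_s = 0.060      # upper end of the 40-60 ms throttling range
pedestrian_speed_ms = 1.2       # walking pace stepping off a kerb, m/s

extra_travel_cm = pedestrian_speed_ms * throttle_penalty_s * 100
effective_latency_ms = (baseline_latency_s + throttle_penalty_s) * 1000

print(f"extra pedestrian travel during the penalty: {extra_travel_cm:.0f} cm")
print(f"effective latency under cold throttling:    {effective_latency_ms:.0f} ms")
# -> ~7 cm of movement the model never saw, and ~160 ms end-to-end
#    from a system benchmarked at under 100 ms.
```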
The honest counter-argument is that on-device inference has improved dramatically. That's true. ZF's dual-lens camera work for ADAS systems, and the broader progress in edge AI hardware over the past three years, have brought real capability down to form factors and price points that would have seemed unrealistic in 2021. But ADAS development cycles run four to seven years, include extensive validation across adverse conditions, and are deployed in vehicles that cost forty thousand pounds. A shared scooter operator running a 600-unit fleet in a mid-size city cannot absorb a sensor module that costs more than the scooter. The engineering challenge isn't proving that accurate, low-latency detection is physically possible. It's proving it's possible at a unit cost under 80 dollars, on a device that will be dropped, rained on, and ridden by someone who has never read a safety brief. Those are different problems, and the industry keeps conflating them.
What Fleet Operators Are Actually Buying When They Buy 'AI Safety'
Honda's investment in the UCR micromobility safety program and the wave of municipal ordinance updates across US cities signal something real: the pressure on operators to demonstrate safety capability is now commercial, not just regulatory. Insurers are starting to price fleet risk based on what on-device systems actually log, not what the vendor's spec sheet claims. That shift changes the calculus for operators significantly. A system that detects near-miss events accurately and consistently produces data that reduces insurance premiums, satisfies city permit requirements, and creates defensible incident records. A system that performs well in sunshine and degrades silently in adverse conditions produces the opposite: a false sense of coverage, liability exposure that isn't priced correctly, and eventually a headline. The difference between those two outcomes lives entirely in how the model was trained, what hardware it runs on, and whether the system was tested at 3am on a potholed road or only in a car park in June. Operators buying safety AI right now are largely unable to tell which one they're getting. The vendors selling it are often unable to tell either.
An edge model that processes 40 frames per second in sunshine and 12 in rain isn't a safety system. It's a fair-weather guarantee with a safety label on it.
Thermal cameras combined with AI inference have shown real promise for low-light pedestrian detection, and the research is credible enough to take seriously. But thermal adds cost, requires different training data, and introduces its own failure modes in high-ambient-temperature environments. There is no single sensor modality that closes every gap. The systems that will actually reduce injury rates in deployed fleets are the ones built around that uncomfortable truth: not optimised for a single condition, but stress-tested against the conditions where riders actually get hurt.

Cities are finally asking the right questions about micromobility safety. The industry needs to stop answering those questions with slide decks and start answering them with operational data from real fleets, in real weather, across a full calendar year. Until that data exists at scale, every safety claim, including ours, should be read with appropriate scepticism. That's not a comfortable position to publish. It's the only honest one.
Frequently Asked Questions
What is the minimum acceptable latency for real-time micromobility hazard detection?
For a rider travelling at 18 mph, a hazard detection system needs to complete inference and trigger a response signal within approximately 200 to 300 milliseconds to allow meaningful intervention before contact. Systems whose end-to-end latency exceeds 150 milliseconds under adverse conditions, including rain, low light, and thermal throttling, leave too little margin for the rest of the response chain and cannot reliably act within that window. Benchmark figures quoted under ideal conditions are not a reliable guide to operational performance.
Why does edge AI for scooter safety perform worse in bad weather?
Rain and cold affect edge AI scooter safety systems through two distinct mechanisms. Physically, water on the lens reduces effective detection range and introduces visual noise that degrades model confidence scores. Thermally, cold temperatures reduce battery output to the compute module, causing clock speed throttling that adds 40 to 60 milliseconds per inference cycle. Neither effect appears in lab benchmarks but both are consistent and measurable in field deployments.
How are fleet operators using on-device inference data for insurance and compliance?
Insurers are beginning to price shared micromobility fleet risk based on verified on-device event logs, including near-miss frequency, harsh braking events, and detected hazard encounters. Operators with accurate, consistent incident data can demonstrate lower risk profiles and negotiate better premiums. However, this only works if the underlying detection system performs reliably across all operating conditions, as silent degradation in adverse weather produces incomplete logs that misrepresent actual fleet risk.
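As a rough illustration of what such a log entry might contain, here is a sketch in Python. The NearMissEvent type and its field names are our illustrative assumptions, not a published or standardised schema; real deployments and insurer requirements will differ.

```python
# A minimal sketch of the kind of on-device event record insurers and
# permitting authorities are starting to ask for. The NearMissEvent type
# and its field names are illustrative assumptions, not a real schema.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class NearMissEvent:
    device_id: str
    timestamp_utc: str
    hazard_class: str          # e.g. "pedestrian", "opening_door", "pothole"
    detection_confidence: float
    rider_speed_ms: float      # metres per second at detection time
    time_to_contact_ms: float  # estimated at detection time
    inference_latency_ms: float
    intervention: str          # e.g. "brake_assist", "haptic_warning", "none"
    conditions: dict           # light level, precipitation, ambient temp, etc.


event = NearMissEvent(
    device_id="scooter-0481",
    timestamp_utc=datetime.now(timezone.utc).isoformat(),
    hazard_class="opening_door",
    detection_confidence=0.91,
    rider_speed_ms=8.0,
    time_to_contact_ms=480.0,
    inference_latency_ms=112.0,
    intervention="brake_assist",
    conditions={"lux": 14, "precipitation": "rain", "ambient_c": 4},
)

# Serialised records like this are what makes near-miss frequency auditable
# rather than a spec-sheet claim.
print(json.dumps(asdict(event), indent=2))
```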
Nearhuman Team
8 Apr 2026