Barcelona, Spain – 4 November 2025
At the upcoming Smart City World Congress 2025, Neural Labs will officially present its new AID Self-Learning Module, an advanced AI functionality within the Neural Orchestr[ai]tor platform. Developed with the support of NVIDIA, the solution brings accelerated computing to large-scale video analytics, enabling cities to monitor thousands of cameras in real time with adaptive, autonomous intelligence.
“Cities are increasingly deploying cameras for traffic and safety management, but most remain passive. Our Self-Learning Module turns these cameras into proactive sensors that detect incidents automatically,” said Juan Silva, Regional Sales Manager at Neural Labs. “With NVIDIA accelerated computing, Neural Orchestr[ai]tor can process dozens of video streams simultaneously, providing instant insights where they’re needed most.”
Urban and highway networks often rely on hundreds or thousands of installed cameras operating in passive mode. Human operators cannot feasibly monitor all streams in real time, leading to delayed or missed incident detection and underused infrastructure.
The AID Self-Learning Module addresses this challenge by automatically learning each camera’s environment and generating its own detection rules. Through deep learning and machine learning, it identifies abnormal situations such as stopped vehicles, traffic congestion, pedestrians or animals on the road, smoke, or wrong-way driving, without requiring any manual configuration.
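For readers unfamiliar with self-learning video analytics, the sketch below illustrates the general principle of learning a per-camera activity baseline and then flagging deviations from it. It is a simplified illustration only: the stream URL, the thresholds, and the use of OpenCV background subtraction are assumptions for the sake of the example, not Neural Labs’ actual models or parameters.

```python
# Conceptual sketch only: the internals of the AID Self-Learning Module are not
# public. This shows the general idea of learning what "normal" activity looks
# like for one camera and flagging frames that deviate from that baseline,
# using OpenCV background subtraction as a stand-in for proprietary models.
import cv2
import numpy as np

STREAM_URL = "rtsp://camera.example/stream"  # hypothetical camera address
LEARNING_FRAMES = 500                        # frames used to learn the scene
Z_THRESHOLD = 4.0                            # deviation that counts as abnormal

subtractor = cv2.createBackgroundSubtractorMOG2(history=LEARNING_FRAMES)
activity_samples = []

cap = cv2.VideoCapture(STREAM_URL)
frame_index = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    activity = float(np.count_nonzero(mask)) / mask.size  # fraction of moving pixels

    if frame_index < LEARNING_FRAMES:
        # Learning phase: collect statistics describing normal activity for this camera.
        activity_samples.append(activity)
    else:
        mean = np.mean(activity_samples)
        std = np.std(activity_samples) + 1e-6
        if abs(activity - mean) / std > Z_THRESHOLD:
            # A production system would raise an incident alert here
            # (stopped vehicle, congestion, wrong-way driving, etc.).
            print(f"frame {frame_index}: abnormal activity ({activity:.4f})")
    frame_index += 1

cap.release()
```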
In collaboration with NVIDIA, Neural Labs has optimized the Self-Learning Module to take full advantage of GPU-accelerated deep learning inference, delivering:
• Real-time performance – simultaneous analysis of dozens of video streams with minimal latency (a conceptual sketch of this pattern follows the list).
• Scalability – easy expansion of monitored cameras without performance loss.
• Energy efficiency – greater performance per watt compared to CPU-only setups.
• Effortless setup – automatic configuration without defining zones or events.
• Continuous learning – adapts dynamically to environmental or camera position changes.
• Broad compatibility – works with any RTSP camera (bullet, dome, or PTZ).
• Scalable centralized processing – AI servers available for 2 to 50 camera streams.
• Flexible licensing – floating licenses allow dynamic redistribution among large camera pools.
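As referenced in the list above, the sketch below shows one common pattern for centralized multi-stream processing: several RTSP readers feed a shared queue, and frames are batched for a single GPU inference pass. The stream URLs, batch size, and placeholder PyTorch model are illustrative assumptions and do not represent Neural Orchestr[ai]tor’s internal pipeline.

```python
# Minimal sketch of centralized multi-stream GPU processing; all names and
# parameters here are assumptions made for illustration.
import queue
import threading
import cv2
import numpy as np
import torch

STREAMS = [f"rtsp://camera{i}.example/stream" for i in range(8)]  # hypothetical
BATCH_SIZE = 8
device = "cuda" if torch.cuda.is_available() else "cpu"

# Tiny placeholder network standing in for a real detection model.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, stride=2),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 2),
).to(device).eval()

frames: "queue.Queue[tuple[str, np.ndarray]]" = queue.Queue(maxsize=64)

def reader(url: str) -> None:
    """Decode one RTSP stream and push resized frames onto the shared queue."""
    cap = cv2.VideoCapture(url)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frames.put((url, cv2.resize(frame, (224, 224))))
    cap.release()

for url in STREAMS:
    threading.Thread(target=reader, args=(url,), daemon=True).start()

with torch.no_grad():
    while True:
        batch, sources = [], []
        for _ in range(BATCH_SIZE):
            url, frame = frames.get()
            sources.append(url)
            batch.append(torch.from_numpy(frame).permute(2, 0, 1).float() / 255.0)
        scores = model(torch.stack(batch).to(device))  # one GPU pass per batch
        for url, score in zip(sources, scores.cpu()):
            print(url, score.tolist())
```

Batching frames from many cameras into a single forward pass is what lets GPU-accelerated servers scale to dozens of streams without a linear increase in latency, which is the property the feature list above highlights.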
A Smarter, More Proactive Approach to Urban Video Analytics
With the AID Self-Learning Module, Neural Labs and NVIDIA empower cities to unlock the full potential of their video networks, moving from passive monitoring to intelligent, adaptive analytics that enhance mobility, safety, and resource efficiency.
Live demonstrations and technical presentations will take place at the Neural Labs booth during Smart City World Congress 2025 in Barcelona. For meeting requests or further information, please contact info@neurallabs.net or visit www.neurallabs.net.
Neural Labs is a leading provider of AI-based video analytics solutions for smart mobility, ITS, and enforcement systems. Its products are deployed worldwide, supporting authorities and system integrators in enhancing urban safety, traffic management, and environmental compliance through advanced computer vision and machine learning technologies.