Spearheading visual navigation
ArtiSLAM combines the best of computer vision and sensor fusion,
taking autonomous systems exactly where they need to be, anywhere, at any time.
Vision-based Tracking
Visual Inertial Odometry (VIO)
Using cameras, Artisense enables autonomous systems to understand how they move locally. The proprietary direct visual SLAM, with optional tight integration of an IMU, enables clients to operate in highly complex and dynamic environments. This powerful dead-reckoning system provides 6-degrees-of-freedom poses (x, y, z, roll, pitch, yaw), velocities and more, at very high frequency and with practically no latency. This gives our clients the key ingredient for increasing the reliability of their applications and for smooth control of their systems. A sketch of such a pose sample is shown below.
High-frequency, near-zero-latency pose output
Performs in featureless areas
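The pose output described above can be made concrete with a minimal sketch of what a 6-DoF VIO sample might contain. The field names, units and types here are illustrative assumptions for this example, not Artisense's actual interface.

```python
from dataclasses import dataclass

# A minimal sketch of a 6-DoF VIO output sample. Field names and units are
# illustrative assumptions, not Artisense's actual interface.
@dataclass
class VioSample:
    timestamp_ns: int   # sensor timestamp in nanoseconds
    x: float            # position in the local odometry frame, metres
    y: float
    z: float
    roll: float         # orientation, radians
    pitch: float
    yaw: float
    vx: float           # linear velocity, metres per second
    vy: float
    vz: float

# Dead reckoning means each sample is expressed relative to where the system
# started; drift accumulates over distance unless corrected by a map or GNSS.
def latest_position(samples: list[VioSample]) -> tuple[float, float, float]:
    s = samples[-1]
    return (s.x, s.y, s.z)
```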
Maps Optimized for Machines
ArtiSLAM Maps
Artisense maps are created with robustness and accuracy in mind. Weather-, season- and scenery-invariant "deep features" are embedded in cm-accurate 3D maps that are used for localization. These maps last longer and require fewer updates.
For systems that require positions in global GNSS coordinates, (RTK-)GNSS is additionally fused to georeference the map. The map then provides global positioning even where GNSS is not available. The entire process is fully automated and managed in the cloud. A simple georeferencing sketch is shown below.
Automated cloud map management
Long lifetime of maps
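To illustrate the georeferencing idea, the sketch below converts a position expressed in the local map frame to global latitude/longitude once a GNSS anchor is known. A simple local-tangent-plane (ENU) approximation is assumed; the anchor values are placeholders, not data from an actual ArtiSLAM map.

```python
import math

# Minimal sketch of georeferencing: once (RTK-)GNSS has anchored the map,
# a position in the local map frame can be reported in global coordinates.
# A flat-earth ENU approximation is assumed here for illustration only.
EARTH_RADIUS_M = 6378137.0

def map_to_global(east_m, north_m, anchor_lat_deg, anchor_lon_deg):
    """Convert an east/north offset in the map frame to latitude/longitude."""
    d_lat = (north_m / EARTH_RADIUS_M) * (180.0 / math.pi)
    d_lon = (east_m / (EARTH_RADIUS_M * math.cos(math.radians(anchor_lat_deg)))) * (180.0 / math.pi)
    return anchor_lat_deg + d_lat, anchor_lon_deg + d_lon

# Example: a point 12 m east and 30 m north of the map's GNSS anchor.
print(map_to_global(12.0, 30.0, 48.137, 11.575))
```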
Continuous Positioning Accuracy - With or Without GNSS
Map-Based Relocalization
Map-based relocalization, i.e. matching deep features detected in real time against a previously generated map, enables continuous positioning with cm-level accuracy. A sketch of the matching step is shown below.
For maps created with GNSS, relocalization allows vehicles to operate with near RTK-level accuracy even in GNSS-denied environments such as tunnels, parking garages or urban canyons.
Reliable localization
Works across vehicle platforms
Independent of GNSS
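The sketch below shows the kind of descriptor matching that underlies map-based relocalization: features extracted from the live camera image are compared against features stored in the map. The descriptor layout and the ratio-test threshold are assumptions for illustration, not Artisense's actual matching pipeline.

```python
import numpy as np

# Sketch of the matching step behind map-based relocalization. Descriptor
# dimensionality, ratio-test threshold and data layout are illustrative.
def match_to_map(query_desc: np.ndarray, map_desc: np.ndarray, ratio: float = 0.8):
    """Return (query_index, map_index) pairs that pass a nearest-neighbour ratio test."""
    matches = []
    for qi, q in enumerate(query_desc):
        dists = np.linalg.norm(map_desc - q, axis=1)    # L2 distance to every map descriptor
        best, second = np.argsort(dists)[:2]
        if dists[best] < ratio * dists[second]:         # keep only unambiguous matches
            matches.append((qi, int(best)))
    return matches

# The matched map points (which carry cm-accurate 3D coordinates) and their
# 2D detections in the image would then feed a pose solver (e.g. PnP) to
# recover the camera pose within the map.
```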
Sensor Fusion - Leaving No Room for Edge Cases
ArtiSLAM Sensor Fusion
By fusing different algorithms, artificial intelligence and sensors, Artisense offers extremely reliable and accurate SLAM that can be modularly tailored to meet specific customer requirements.
Read more about the technology and research on the Research Publications page.
VIO + GNSS
Vision-augmented GNSS significantly increases the accuracy of GNSS systems and extends their reach into GNSS-denied areas over short to mid distances.
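A covariance-weighted update is one simple way to picture fusing a VIO position estimate with a GNSS fix. The sketch below is a single Kalman-style measurement update under assumed covariances; it illustrates the idea of vision-augmented GNSS only and is not Artisense's actual fusion pipeline.

```python
import numpy as np

# Sketch: fuse a VIO position estimate with a GNSS fix, weighted by their
# covariances (one Kalman-style measurement update). Values are illustrative.
def fuse(vio_pos, vio_cov, gnss_pos, gnss_cov):
    vio_pos, gnss_pos = np.asarray(vio_pos, float), np.asarray(gnss_pos, float)
    vio_cov, gnss_cov = np.asarray(vio_cov, float), np.asarray(gnss_cov, float)
    gain = vio_cov @ np.linalg.inv(vio_cov + gnss_cov)   # Kalman gain
    fused_pos = vio_pos + gain @ (gnss_pos - vio_pos)
    fused_cov = (np.eye(len(vio_pos)) - gain) @ vio_cov
    return fused_pos, fused_cov

# When GNSS drops out (tunnel, urban canyon), the VIO estimate simply carries
# the position forward for short to mid distances until GNSS returns.
pos, cov = fuse([10.0, 5.0], np.eye(2) * 0.04, [10.3, 4.9], np.eye(2) * 0.25)
```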
VIO + Relocalization
Additional fusion of map-based relocalization provides accurate positioning without any distance limits.
Wheel Odometry & INS mode
Fusing IMU and wheel odometry data in parallel with VIO creates a second, redundant and independent input that further increases reliability.
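One way a redundant input can raise reliability is through a consistency check between the two odometry sources. The threshold and fallback policy below are illustrative assumptions, not Artisense's actual logic.

```python
# Sketch of a consistency check between VIO and wheel-odometry / INS motion
# increments. If they diverge beyond a tolerance, the system can fall back to
# the more trusted source. Threshold and policy are illustrative assumptions.
def select_odometry(vio_delta_m: float, wheel_delta_m: float, tol_m: float = 0.05):
    if abs(vio_delta_m - wheel_delta_m) <= tol_m:
        return "fuse both"            # sources agree, fuse normally
    return "fall back to wheel/INS"   # e.g. VIO degraded in a poorly textured scene

print(select_odometry(0.52, 0.50))    # -> "fuse both"
```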
LiDAR
Kudan Lidar SLAM can optionally be used to create denser maps for specific use cases or as an additional input for positioning.
Sensor fusion to curb individual sensor limitations
Nanosecond time synchronization
Covariance for all systems
Looking to experience ArtiSLAM in your projects?
Other Features
Dynamic Object Masking
By omitting moving vehicles and people during VIO and mapping, ArtiSLAM focuses only on the visual input that matters.
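The sketch below shows the basic idea: keypoints falling on pixels labelled as moving objects are discarded before tracking and mapping. The mask source (e.g. a segmentation network) and the array layout are assumptions for illustration.

```python
import numpy as np

# Sketch of dynamic object masking: keypoints on pixels labelled as moving
# objects (vehicles, pedestrians) are dropped before tracking and mapping.
def filter_keypoints(keypoints: np.ndarray, dynamic_mask: np.ndarray) -> np.ndarray:
    """keypoints: (N, 2) array of (u, v) pixel coordinates.
    dynamic_mask: (H, W) boolean array, True where a moving object was detected."""
    u = keypoints[:, 0].astype(int)
    v = keypoints[:, 1].astype(int)
    keep = ~dynamic_mask[v, u]          # drop points that land on dynamic objects
    return keypoints[keep]
```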
Plug & Play ROS node
Artisense provides ROS 1 and ROS 2 support, making it quick and simple for clients to work with our system.
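As a minimal ROS 2 sketch, the rclpy node below subscribes to an odometry stream and logs the incoming poses. The topic name "/artisense/odometry" is a placeholder assumption; consult the ArtiSLAM documentation for the actual topic names and message types.

```python
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry

# Minimal ROS 2 (rclpy) sketch of consuming a pose stream. The topic name
# "/artisense/odometry" is a placeholder assumption, not the documented name.
class PoseListener(Node):
    def __init__(self):
        super().__init__("pose_listener")
        self.create_subscription(Odometry, "/artisense/odometry", self.on_odom, 10)

    def on_odom(self, msg: Odometry):
        p = msg.pose.pose.position
        self.get_logger().info(f"pose: x={p.x:.2f} y={p.y:.2f} z={p.z:.2f}")

def main():
    rclpy.init()
    rclpy.spin(PoseListener())

if __name__ == "__main__":
    main()
```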