AUS Laboratory

The aim of the AUS Lab is to develop novel methods for complex problems in autonomous driving. The required working hours are 120 (AUS Lab I) or 180 (AUS Lab II). The projects are supervised by professors and/or PhD candidates.

At the end of the semester, each participant writes a project report and presents their work in a 10-minute talk at a final workshop. The grade is given in the last week of the examination period.

For complex problems, the laboratory work can be continued as an MSc thesis.

We offer the students several topics, but they may also propose and implement their own ideas.

The list of topics for the 2024 fall semester is as follows:

1. Human Detection and Localization using a Ground-based Microphone Array

Objective: Develop a system to detect human intruders in designated private zones using audio-based technologies.

Approach:

Detection: Use a microphone array (or multiple arrays) to capture audio signatures specific to human walking patterns. The primary goal is to classify these patterns reliably so that human presence can be distinguished from other sounds in the environment.

Localization: Apply triangulation techniques to estimate the position of the detected human within the private zone. Initially, the system will localize one source at a time (see the sketch below).
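As a starting point for the localization part, the sketch below shows one common way to triangulate a single sound source from a small microphone array: time differences of arrival (TDOA) estimated by cross-correlation, followed by a least-squares position fit. This is only a minimal illustration under simplifying assumptions (2D geometry, known microphone positions, a single source, no noise handling); the array layout, signal names, and TDOA values are placeholders, not part of the project specification.

```python
import numpy as np
from scipy.optimize import least_squares  # SciPy is assumed to be available

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def tdoa(sig_a, sig_b, fs):
    """Estimate the time difference of arrival between two microphone signals
    from the peak of their cross-correlation (a simple, non-robust baseline)."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    return lag / fs

def localize(mic_positions, tdoas, x0=(0.0, 0.0)):
    """Least-squares 2D source localization from TDOAs measured against mic 0.

    mic_positions: (M, 2) array of microphone coordinates in metres.
    tdoas:         (M-1,) TDOAs of mics 1..M-1 relative to mic 0, in seconds.
    """
    def residuals(p):
        d = np.linalg.norm(mic_positions - p, axis=1)   # distance to each mic
        # predicted range differences vs. measured ones (converted to metres)
        return (d[1:] - d[0]) - np.asarray(tdoas) * SPEED_OF_SOUND

    return least_squares(residuals, x0).x

# Hypothetical usage with a square array and fabricated TDOA values:
if __name__ == "__main__":
    mics = np.array([[0, 0], [4, 0], [4, 4], [0, 4]], dtype=float)
    print(localize(mics, tdoas=[0.002, 0.004, 0.002], x0=(2.0, 2.0)))
```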

Simulation and Development: Before deploying actual hardware, the system will be simulated using online tools and available datasets. This phase aims to refine the classification and localization algorithms under controlled conditions.
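For the classification step mentioned above, a simple offline baseline could look like the sketch below: summarize each audio clip by MFCC statistics and train a standard classifier on labelled clips from a public dataset. Everything here is an assumption made for illustration (the dataset, file paths, labels, and the choice of librosa and scikit-learn); the actual feature set and model are up to the project.

```python
import numpy as np
import librosa                      # assumed available for audio feature extraction
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def clip_features(path, sr=16000, n_mfcc=20):
    """Summarize an audio clip by the mean and std of its MFCCs."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_baseline(clips):
    """'clips' is a hypothetical list of (wav_path, label) pairs, e.g. from a
    public footstep / environmental-sound dataset; label 1 = footsteps, 0 = other."""
    X = np.stack([clip_features(p) for p, _ in clips])
    y = np.array([label for _, label in clips])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    return clf
```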

Field Testing: Upon successful simulation, the project will progress to field testing with hardware provided by the university. This phase will test the system’s efficacy in real-world scenarios and gather data to further improve its accuracy and reliability.

2. Data visualization and simulation software for testing calibration algorithms

The goal of the lab work is to create data visualization and simulation software that provides an easy way to test and develop the calibration algorithms of the GCVG research group. The application should be developed in Unity.

Basic Task: 

The application should be capable of simulating 2D/3D LiDAR and camera systems, allowing users to set their relative positions in space and intrinsic parameters. In addition to sensor simulation, the program should also be able to place simple target objects commonly used in calibration algorithms, such as planes, chessboards, cylinders, and spheres. The program should provide options for texturing objects and loading more complex meshes. The application should also allow the saving of simulated sensor data. 
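To make the simulation requirement more concrete, here is a minimal conceptual sketch (in Python rather than the Unity/C# target environment) of how a single 2D LiDAR sweep against a planar target could be simulated by ray-plane intersection. All names and parameter values are illustrative assumptions, not part of the specification.

```python
import numpy as np

def simulate_2d_lidar(sensor_pos, plane_point, plane_normal,
                      fov_deg=180.0, n_beams=181, max_range=30.0):
    """Simulate one 2D LiDAR sweep hitting an infinite planar target.

    Returns an (n_beams, 2) array of hit points; beams that miss the target
    (or exceed max_range) are returned as NaN.
    """
    angles = np.deg2rad(np.linspace(-fov_deg / 2, fov_deg / 2, n_beams))
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # unit ray directions

    hits = np.full((n_beams, 2), np.nan)
    denom = dirs @ plane_normal                     # ray/plane alignment per beam
    valid = np.abs(denom) > 1e-9                    # ignore beams parallel to the plane
    t = ((plane_point - sensor_pos) @ plane_normal) / denom[valid]
    in_range = (t > 0) & (t < max_range)            # keep forward hits within range
    idx = np.where(valid)[0][in_range]
    hits[idx] = sensor_pos + t[in_range, None] * dirs[idx]
    return hits

# Hypothetical usage: a wall 5 m in front of the sensor, facing back towards it.
scan = simulate_2d_lidar(np.zeros(2), plane_point=np.array([5.0, 0.0]),
                         plane_normal=np.array([-1.0, 0.0]))
```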

In addition to simulation, the developed application must be able to simultaneously read and display LiDAR point clouds from various file formats (.PLY, .PCD, .XYZ) and camera images. The program should provide the capability to specify the intrinsic and extrinsic parameters of the sensors and use these to color the LiDAR point cloud based on the images, as well as display the points of the point cloud on the images. 
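A possible shape of the point-cloud coloring step is sketched below in Python, assuming Open3D for point-cloud I/O and OpenCV for images; the actual application is to be written in Unity, so this is only a conceptual reference. The extrinsics convention (X_cam = R·X + t) and the file names are assumptions.

```python
import numpy as np
import open3d as o3d   # assumed available for point-cloud I/O
import cv2             # assumed available for image I/O

def color_cloud_from_image(cloud_path, image_path, K, R, t):
    """Color a LiDAR point cloud from a camera image.

    K:    3x3 camera intrinsic matrix.
    R, t: extrinsics mapping LiDAR coordinates into the camera frame.
    """
    pcd = o3d.io.read_point_cloud(cloud_path)      # .ply / .pcd / .xyz are supported
    img = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    h, w = img.shape[:2]

    pts = np.asarray(pcd.points)
    pts_cam = pts @ R.T + t                        # transform points into the camera frame
    colors = np.zeros((len(pts), 3))               # points that do not project stay black

    in_front = pts_cam[:, 2] > 1e-6                # only points in front of the camera
    uv = pts_cam[in_front] @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                    # pinhole projection to pixel coordinates
    u, v = np.round(uv[:, 0]).astype(int), np.round(uv[:, 1]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    idx = np.where(in_front)[0][inside]
    colors[idx] = img[v[inside], u[inside]] / 255.0
    pcd.colors = o3d.utility.Vector3dVector(colors)
    return pcd
```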

Bonus: 

The application should make it easy to integrate calibration algorithms that use the aforementioned data, through interfaces or abstract class implementations. The program should give users implementing an algorithm access to the data, as well as to the extrinsic and intrinsic parameters of the sensors (if available). The program should be able to run correctly implemented algorithms and to display and evaluate the results they return. For ease of implementation, this could be a separate application or library.
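One way to think about such an interface is sketched below as a Python abstract base class; in the Unity application the same idea would be expressed as a C# interface or abstract class. The data fields, method names, and the error metric are illustrative assumptions, not a prescribed API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class SensorData:
    """Data handed to a calibration algorithm by the host application (illustrative)."""
    point_cloud: np.ndarray            # (N, 3) LiDAR points
    image: np.ndarray                  # (H, W, 3) camera image
    intrinsics: Optional[np.ndarray]   # 3x3 K matrix, if known
    extrinsics: Optional[np.ndarray]   # 4x4 LiDAR-to-camera transform, if known

class CalibrationAlgorithm(ABC):
    """Interface a calibration algorithm implements so the host app can run it."""

    @abstractmethod
    def run(self, data: SensorData) -> np.ndarray:
        """Return the estimated 4x4 extrinsic transform."""

    def evaluate(self, estimate: np.ndarray, ground_truth: np.ndarray) -> float:
        """Simple error metric: Frobenius norm between estimate and ground truth."""
        return float(np.linalg.norm(estimate - ground_truth))
```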

3. Depth Anything: depth estimation with a pre-trained network

The goal of this project is to apply a pre-trained network called ‘Depth Anything’ to the images of ELTECar.

The results of Depth Anything are visually impressive; however, the depth values are not always correct. Moreover, the obtained depth is only defined up to an unknown scale.

The aim for this semester is to estimate the scale between the predicted and the real depth maps, based on LiDAR measurements and/or stereo vision / planar omnidirectional vision.
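A minimal version of the scale estimation could be a closed-form least-squares fit between the network's depth map and sparse metric depth obtained by projecting LiDAR points into the image, as sketched below. This assumes the LiDAR-to-image projection is already available and that a single global scale is sufficient (in practice a scale-and-shift model, or per-region scales, may be needed); all names are placeholders.

```python
import numpy as np

def estimate_scale(pred_depth, lidar_depth, lidar_mask):
    """Estimate the scale s minimizing || s * pred - lidar ||^2 over pixels
    where sparse LiDAR depth is available.

    pred_depth:  (H, W) relative depth map predicted by the network.
    lidar_depth: (H, W) metric depth from projected LiDAR points.
    lidar_mask:  (H, W) boolean mask of pixels with a valid LiDAR return.
    """
    p = pred_depth[lidar_mask].astype(np.float64)
    l = lidar_depth[lidar_mask].astype(np.float64)
    return float(np.dot(p, l) / np.dot(p, p))   # closed-form least-squares scale
```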

4. Visualization of data recorded by ELTECar

ELTECar is the vehicle of the Faculty of Informatics, equipped with several different sensors:

  • Digital cameras with normal and fisheye lenses 
  • 3D LiDAR 
  • GPS device with RTK correction, providing localization precision within 3 cm 
  • IMU: accelerometer, magnetometer 

The aim of the project is to generate videos that can visualize different sensor data as spectacularly as possible.  

Example visualization. Top: images from four cameras stitched together. Bottom: LiDAR points drawn onto OpenStreetMap, with ground points shown in red.
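For the OpenStreetMap overlay shown in the example, one possible building block is the standard Web-Mercator ("slippy map") conversion from GPS latitude/longitude to pixel coordinates on a pre-rendered map crop, sketched below with OpenCV used for drawing. The zoom level, map crop, and variable names are illustrative assumptions.

```python
import numpy as np
import cv2   # assumed available for drawing and video writing

TILE = 256   # OpenStreetMap tile size in pixels

def latlon_to_global_px(lat_deg, lon_deg, zoom):
    """Standard Web-Mercator (slippy map) conversion to global pixel coordinates."""
    lat = np.radians(lat_deg)
    n = TILE * 2 ** zoom
    x = (lon_deg + 180.0) / 360.0 * n
    y = (1.0 - np.log(np.tan(lat) + 1.0 / np.cos(lat)) / np.pi) / 2.0 * n
    return x, y

def draw_track(map_img, map_origin_px, track_latlon, zoom, color=(0, 0, 255)):
    """Draw GPS positions onto a pre-rendered OSM map crop.

    map_origin_px: global pixel coordinates of the crop's top-left corner.
    track_latlon:  iterable of (lat, lon) pairs from the RTK-GPS log (placeholder name).
    """
    ox, oy = map_origin_px
    for lat, lon in track_latlon:
        x, y = latlon_to_global_px(lat, lon, zoom)
        cv2.circle(map_img, (int(x - ox), int(y - oy)), 3, color, -1)
    return map_img
```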

5. Parking space detection on bird's-eye view images

(industrial project, details are hidden – they will be given at the kick-off meeting)

6. State Initialization Methods for Localization Algorithms

(industrial project, details are hidden – they will be given at the kick-off meeting)