Project types

Each project in Keylabs can have its own customizable annotation description, with its own objects/attributes and its own dataset to process.

The project types are:

  1. Picture/Video

  2. Point cloud/3D

  3. Picture/Video Merge

  4. Point Cloud/3D Merge

  5. External

Picture/Video

The default project type is "Picture/Video", suitable for annotating photos and videos. It is most often used in the following areas:

  1. Training computer vision models relies heavily on photo and video annotations. Annotations are crucial for teaching artificial intelligence models to recognize and classify objects in images and videos; by learning from the annotated data, the models can accurately identify various objects and their distinguishing characteristics.

  2. Annotations in photos and videos are also commonly used in the development and training of autonomous navigation systems, such as those found in autonomous cars and robots. These annotations help to identify obstacles, road signs, pedestrians, and other objects on the road, which in turn allows the autonomous system to make informed decisions and control its movements.

  3. Annotations are also used to analyze content in videos and photos. For example, they can be used to identify faces of people in photos and videos, classify objects in images, highlight key scenes or events in videos, and track the movement of objects in a video stream.

  4. In the medical field, annotations are crucial for processing medical images such as X-rays, MRIs, and ultrasounds. They help doctors and researchers identify and locate pathological changes, label organs and structures, and analyze medical data for diagnosis and treatment.

  5. Creating labeled training data involves annotating photos and videos. Annotation adds tags to the data, which can then be used in fields such as machine learning, research, and augmented reality application development.

Annotations enrich image and video data with labels, which greatly expands how that data can be used in artificial intelligence and computer vision.

Point cloud/3D

Annotating 3D models and scenes covers a wide range of applications, including computer vision development and training, virtual reality, robotics and autonomous systems, and medical image annotation.

With 3D annotations, you can add labels and semantic information to 3D models of objects and scenes, making it easier to train deep learning-based computer vision models to recognize and classify objects and their features in 3D space.

Moreover, 3D annotations are used in virtual reality development to define object boundaries and contours and to place virtual objects and elements in the real world. They can also be used to create 3D models of the environment and to animate them.

Robots and autonomous systems also benefit from 3D annotations. They help determine the position and orientation of objects in 3D space, and assist robots in understanding the environment and making appropriate decisions.

Furthermore, 3D annotations are essential in the medical field for indicating anatomical structures and pathological changes on 3D models of organs or tissues. This allows doctors and researchers to better understand and visualize complex 3D data.

Processing and analyzing LiDAR (Light Detection and Ranging) data involves complex algorithms and software.

LiDAR technology uses laser pulses to collect precise three-dimensional data about the earth's surface or about objects. The pulses emitted by the LiDAR device are reflected back to a detector, which records the return time and the reflected energy. From this information, a three-dimensional point cloud is built that accurately represents the surface of the objects and the surrounding environment. LiDAR data processing is used in cartography, geographic information systems, autonomous navigation, forestry, architecture, aerospace, and other fields.
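As a rough illustration of this principle (not a Keylabs feature), the sketch below converts hypothetical time-of-flight measurements and beam angles into a small point cloud using the relation distance = c · Δt / 2. The function name, parameters, and sample values are assumptions made purely for the example.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s


def pulses_to_points(return_time_s, azimuth_rad, elevation_rad):
    """Convert round-trip times and beam angles into XYZ points (metres)."""
    distance = C * np.asarray(return_time_s) / 2.0  # one-way range from time of flight
    x = distance * np.cos(elevation_rad) * np.cos(azimuth_rad)
    y = distance * np.cos(elevation_rad) * np.sin(azimuth_rad)
    z = distance * np.sin(elevation_rad)
    return np.stack([x, y, z], axis=-1)


# Example: three returns measured at different beam angles
points = pulses_to_points(
    return_time_s=[6.7e-7, 1.3e-6, 2.0e-6],
    azimuth_rad=[0.0, 0.5, 1.0],
    elevation_rad=[0.0, 0.1, -0.1],
)
print(points)  # (3, 3) array of XYZ coordinates
```

Each row of the resulting array is one point of the cloud; a real LiDAR pipeline would also record intensity (reflected energy) and timestamps per point.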

3D annotations provide a set of tools for working with 3D data and models, offering vast opportunities in computer vision, virtual reality, robotics, and medicine.

Merge project

The Merge project is a feature in the annotation editor that enables you to combine multiple segments or fragments of a file into a single, seamless segment. It is useful in any context or domain where separated parts need to be merged back into a single element.

The annotation editor supports different file types, such as videos, images, and 3D images. Splitting files into segments simplifies further processing and reduces the hours of work required for your project; the merge feature then combines the previously separated segments or individual parts back into one common segment.
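The merge itself is handled inside the editor, but the idea can be illustrated with a small hypothetical sketch. The Segment/Annotation structures and frame offsets below are illustrative only, not the Keylabs data model: annotations made on separate segments are shifted by each segment's start frame so that they land on one common timeline.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Annotation:
    frame: int   # frame index, local to the segment it was made on
    label: str   # e.g. "car", "pedestrian"


@dataclass
class Segment:
    start_frame: int             # where this segment begins in the full video
    annotations: List[Annotation]


def merge_segments(segments: List[Segment]) -> List[Annotation]:
    """Re-map per-segment annotations onto one common timeline."""
    merged: List[Annotation] = []
    for seg in sorted(segments, key=lambda s: s.start_frame):
        for ann in seg.annotations:
            # shift the local frame index by the segment's global offset
            merged.append(Annotation(frame=seg.start_frame + ann.frame, label=ann.label))
    return merged


# Two segments of the same video, annotated separately, merged into one list
segments = [
    Segment(start_frame=0, annotations=[Annotation(0, "car"), Annotation(50, "car")]),
    Segment(start_frame=100, annotations=[Annotation(10, "pedestrian")]),
]
print(merge_segments(segments))
```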
