Deep Learning for Pose Estimation and Action Detection (closed)

Internship title

Deep Learning for Pose Estimation and Action Detection

Objective

Live audience interaction within art installations has been a long-standing interest for artists and creators. Using cameras and machine learning methods, the Metalab is developing a tool that adds an interactive component to artistic experiences designed by creators. This interactive component relies on detecting the audience's position and body key points, enabling real-time interaction with visual and sound elements.

The objective of this internship is to extend the Metalab's interactive offering with action detection, allowing user actions to control, influence, and transform visual and sound elements: for example, driving particle movement or modifying the shape or position of an object through physical gestures.
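
As a purely illustrative sketch (not the Metalab's actual implementation), the snippet below shows one way a detected action could drive a visual parameter: a simple "hands raised" heuristic computed from body key points is mapped onto a particle-speed value and sent over OSC, a protocol commonly used in interactive installations. The keypoint ordering (COCO: shoulders at indices 5 and 6, wrists at 9 and 10), the OSC address /particles/speed, and the host/port are all assumptions.

from pythonosc.udp_client import SimpleUDPClient

# Hypothetical address/port of the engine rendering the visuals or sound.
client = SimpleUDPClient("127.0.0.1", 9000)

def hands_raised(keypoints):
    # keypoints: 17 (y, x, score) triplets in normalized image coordinates (y grows downward),
    # in COCO order: 5 = left shoulder, 6 = right shoulder, 9 = left wrist, 10 = right wrist.
    left_up = keypoints[9][0] < keypoints[5][0]
    right_up = keypoints[10][0] < keypoints[6][0]
    return left_up and right_up

def on_new_pose(keypoints):
    # Map the detected action onto a (hypothetical) particle-speed parameter.
    speed = 1.0 if hands_raised(keypoints) else 0.2
    client.send_message("/particles/speed", speed)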

Tasks

With the support of the Metalab team:

  • Explore pose estimation and action detection techniques for our use case

  • Integrate tools based on artificial intelligence and machine learning algorithms

  • Work with video streams from RGB cameras (webcams, traditional cameras, and industrial cameras); see the sketch after this list

  • Participate in the lab's day-to-day workflow: scrums, code reviews, etc.

  • Document the work and ensure its reproducibility
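
To give a concrete idea of the starting point, here is a minimal sketch of single-person pose estimation on a webcam stream with OpenCV and TensorFlow. It assumes the publicly available MoveNet "lightning" model from TensorFlow Hub and the default webcam; the actual models, cameras, and pipeline used at the Metalab may differ.

import cv2
import tensorflow as tf
import tensorflow_hub as hub

# MoveNet single-pose "lightning": a lightweight 17-keypoint detector
# (an assumption for this sketch, not necessarily the model used in the Metalab tool).
model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
movenet = model.signatures["serving_default"]

cap = cv2.VideoCapture(0)  # default webcam; industrial cameras usually require their own SDK
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # The model expects a 192x192 int32 RGB image with a batch dimension.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    inp = tf.cast(tf.image.resize_with_pad(tf.expand_dims(rgb, axis=0), 192, 192), tf.int32)
    # Output shape [1, 1, 17, 3]: 17 keypoints as (y, x, confidence), normalized to the
    # padded input; the mapping back to the frame below ignores the letterbox padding.
    keypoints = movenet(inp)["output_0"].numpy()[0, 0]
    h, w = frame.shape[:2]
    for y, x, score in keypoints:
        if score > 0.3:
            cv2.circle(frame, (int(x * w), int(y * h)), 4, (0, 255, 0), -1)
    cv2.imshow("pose", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()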

Work environment

  • Python

  • TensorFlow, OpenCV

  • JIRA / Confluence

  • GitLab

  • Linux, Free and Open Source Software (FOSS)