FINGER COUNTER USING OPENCV


Computer Vision plays a significant role in self-driving cars, robotics, and photo-correction apps. It is the process by which we understand how images and videos are stored, and how we can manipulate and retrieve data from them. Computer Vision is a foundation of, and one of the most widely used fields in, AI.
                         


This project is a good way to get started in the field of computer vision. It is fairly simple and great for high-school projects. It is written in the Python programming language and uses the OpenCV module. OpenCV is an open-source library for computer vision, machine learning, and image processing, and it now plays a significant role in real-time processing, which is very important in today's systems. With it, one can process images and videos to identify objects, faces, or even a person's handwriting. When OpenCV is integrated with libraries such as NumPy, Python can process the OpenCV array structure for analysis: to identify image patterns and their features, we treat them as vectors and perform mathematical operations on those features.

The program also uses a custom hand-tracking module built on the MediaPipe module. MediaPipe Hands uses an ML pipeline consisting of multiple models working together: a palm detection model that operates on the full image and returns an oriented hand bounding box, and a hand landmark model that operates on the cropped image region defined by the palm detector and returns high-fidelity 3D hand keypoints. Feeding the accurately cropped hand image to the hand landmark model drastically reduces the need for data augmentation (e.g., rotations, translation, and scale) and instead lets the network devote most of its capacity to landmark prediction accuracy. The pipeline is implemented as a MediaPipe graph that uses a hand landmark tracking subgraph from the hand landmark module and renders using a dedicated hand renderer subgraph. The hand landmark tracking subgraph internally uses a hand landmark subgraph from the same module and a palm detection subgraph from the palm detection module.
After palm detection over the full image, the hand landmark model performs precise keypoint localization of 21 3D hand-knuckle coordinates inside the detected hand regions via regression, i.e., direct coordinate prediction. The model learns a consistent internal hand pose representation and is robust even to partially visible hands and self-occlusions.




The hand-tracking module is built so that we can reuse and customize it for other projects (reusability is vital in programming). We can customize parameters such as the maximum number of hands to detect and track, whether to draw the landmark points, and so on.

                                                    


The program is configured to detect and track the right hand only; of course, this can be tweaked by changing the parameters within the program.
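Given the 21 landmarks, the finger counting itself reduces to simple geometry. The sketch below shows one common approach for an upright right hand facing the camera: a finger is "up" when its tip sits above the joint two landmarks below it (smaller y, since image y grows downward), while the thumb is compared along x. This is an illustrative reimplementation, not the repository's exact code.

```python
# MediaPipe hand landmark indices: 4, 8, 12, 16, 20 are the tips of
# the thumb, index, middle, ring, and pinky fingers respectively.
TIP_IDS = [4, 8, 12, 16, 20]

def count_fingers(landmarks):
    """Count raised fingers from 21 (x, y) pairs in image coordinates
    (y grows downward). Assumes an upright right hand, palm forward."""
    fingers = 0
    # Thumb: for a right hand seen palm-forward, an extended thumb's
    # tip (4) lies to the left of the joint below it (3).
    if landmarks[TIP_IDS[0]][0] < landmarks[TIP_IDS[0] - 1][0]:
        fingers += 1
    # Other fingers: a tip above (smaller y than) the PIP joint two
    # landmarks below it means that finger is raised.
    for tip in TIP_IDS[1:]:
        if landmarks[tip][1] < landmarks[tip - 2][1]:
            fingers += 1
    return fingers
```

Flipping the thumb comparison (or checking MediaPipe's handedness output) adapts the same logic to the left hand.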
                                    

The full code and the necessary files are available in my GitHub repository.

If you have any queries, feel free to ask in the comment section below.

Happy programming :)


