Since Apple’s introduction of ARKit 2, we have been working consistently to create shared AR experiences. Our goal is to improve the utility of mobile apps through AR experiences.
This project shows how to use Core ML and Vision with a pre-trained deep learning SSD (Single Shot MultiBox Detector) model. There are many variations of SSD. The one we’re going to use has MobileNetV2 as the backbone and uses depthwise separable convolutions for the SSD layers; this variant is known as SSDLite. The app can find the locations of several different types of objects in an image. Detections are described by bounding boxes, and for each bounding box the model also predicts a class.
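The flow described above can be sketched with Vision wrapping the Core ML model. This is a minimal sketch, not the project's actual code: the model class name `MobileNetV2_SSDLite` is a placeholder for whatever class Xcode generates from the bundled `.mlmodel` file.

```swift
import Vision
import CoreML
import CoreGraphics

// Minimal sketch: run an SSDLite (MobileNetV2 backbone) Core ML model
// through Vision. "MobileNetV2_SSDLite" is an assumed generated class name.
func detectObjects(in image: CGImage) {
    guard let coreMLModel = try? MobileNetV2_SSDLite(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else { return }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // For object detectors, Vision returns VNRecognizedObjectObservation,
        // each carrying a bounding box plus ranked class labels.
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in results {
            let box = observation.boundingBox        // normalized coordinates
            let topLabel = observation.labels.first  // best class prediction
            print("\(topLabel?.identifier ?? "?") @ \(box)")
        }
    }
    request.imageCropAndScaleOption = .scaleFill

    let handler = VNImageRequestHandler(cgImage: image)
    try? handler.perform([request])
}
```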
A Core ML and Vision object classifier with a lightweight trained model. The model is trained and tested with Create ML directly in Xcode Playgrounds, using the dataset I provided.
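Training in a playground as described above might look like the following sketch. The dataset paths are placeholders (the folders are assumed to contain one subdirectory per class label), and the output model name is hypothetical.

```swift
import CreateML
import Foundation

// Minimal sketch: train an image classifier with Create ML in an
// Xcode Playground. Paths are placeholders, not the project's real layout.
let trainingDir = URL(fileURLWithPath: "/path/to/dataset/train")
let testingDir  = URL(fileURLWithPath: "/path/to/dataset/test")

// Each subfolder of the training directory names one class label.
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))

// Evaluate on a held-out test set and report the error rate.
let evaluation = classifier.evaluation(on: .labeledDirectories(at: testingDir))
print("Test classification error: \(evaluation.classificationError)")

// Export the trained model for use in an app.
try classifier.write(to: URL(fileURLWithPath: "/path/to/Classifier.mlmodel"))
```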
This iOS application has two parts: one implements a ResNet model (a Core ML model), and the other a hand-state detection model that I built using Custom Vision.
An iOS app performing speed limit sign recognition.
Do you know if this works with YOLOv4 or YOLOv5?