This document is provided under a license for use only, with all other rights, including ownership rights, retained by eInfochips. This file may not be distributed, copied, or reproduced in any manner, electronic or otherwise, without the express written consent of eInfochips...
DOCUMENT DETAILS

Document History

Author       Date (DD-MM-YYYY)   Reviewer       Date (DD-MM-YYYY)   Approver       Date (DD-MM-YYYY)
Anil Patel   19-Apr-2019         Prajose John   19-Apr-2019         Bhavin Patel   19-Apr-2019
Anil Patel   15-Oct-2019         Prajose John   15-Oct-2019         Bhavin Patel   15-Oct-2019

Version...
Demos to run on iMX8ML_RD AIML firmware.

About the System
• This system contains the iMX8X reference design with multiple interfaces. It is used for the machine-learning experience.

Figure 1: iMX8XML RD

Pre-requisites
• An x86 host system with Ubuntu 16.04 LTS installed
•...
ML DEMOS BACKGROUND
To demonstrate the board's capabilities for machine learning, we implemented a few audio- and video-related ML demos. These demos depend mainly on OpenCV, TensorFlow, Caffe, Arm NN, and some Python packages. All video ML demos require a video source (webcam or D3 Mezzanine-based OV5640 camera) to capture a live stream and perform some action on it.
Figure 3: SD card partitions overview after flashing firmware
• As shown in the figure above, the SD card layout has an unused partition (10 GB) at the end, which we can utilize.
• Now click the "+" sign to create a new partition.
•...
Figure 4: Create New EXT4 partition
• Creating the partition takes a little while; once done, we are able to mount it. (See the figure below for reference.)
Figure 5: SD card partition after creating new one
• Copy ARROW DEMOS onto this partition. If you are unable to do so, unmount and mount it again.
• After a successful copy, boot the AIML board with this SD card.
•...
Run Setup
All required Python packages for the ML demos are already installed in the AIML firmware image. Our demos run using Python3, so we have added packages for Python3 rather than Python 2. We also provide the pip package for both Python and Python3, through which we can add or remove any Python package without having to rebuild the firmware image each time.
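As a quick sanity check before running the demos, the presence of the required Python3 packages can be verified on the board. This is a minimal sketch, not part of the firmware: the package list below is an assumption for illustration, not the exact demo manifest.

```python
# Sketch: verify that the Python3 packages the ML demos rely on are importable.
# The REQUIRED list is illustrative, not the firmware's exact dependency set.
import importlib.util

REQUIRED = ["numpy", "cv2", "tensorflow"]  # assumed demo dependencies

def missing_packages(names):
    """Return the subset of module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    for name in missing_packages(REQUIRED):
        # Each missing module could then be added on the board with:
        #   pip3 install <package>
        print("missing:", name)
```

Any module reported missing can be installed with pip3 directly on the board, which is exactly what avoids rebuilding the firmware image.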
RUNNING ML DEMOS
To run the ML demos, we have created the run_ml_demos.sh shell script. This script asks for user preferences such as demo type, camera type, camera node entry, and desired MIC, and runs the ML demos based on them. In this section, we discuss how to run each demo. A detailed description of each demo is in the next section. 1.
Run the /run/media/mmcblk1p3/ARROW_DEMOS/run_ml_demos.sh script and select option 1. See the full log below; user input is shown in bold red fonts.
# sh /run/media/mmcblk1p3/ARROW_DEMOS/run_ml_demos.sh
######## Welcome to ML Demos [AI Corowd Count/Object detection/Face Recognition/Speech Recognition/Arm NN] ##########
Prerequisite: Have you run <setup_ml_demo.sh>? Press: (y/n)
****** Script Started *******
Setup is already completed.
D3 Mazzanine Camera is used for demo
[ 130.962684] random: crng init done
[ 130.966100] random: 7 urandom warning(s) missed due to ratelimiting
ImportError: No module named 'numpy.core._multiarray_umath'
ImportError: No module named 'numpy.core._multiarray_umath'
NN> using tensorflow version: 1.3999999999999999
NN> using CPU-only (NO CUDA) with tensorflow
##### LOADING TENSORFLOW GRAPH #####
##### 160x120 used for live mode as these typically are close
##### 640x480 used for pre-captured large images of crowd as these are far...
2. Object Detection Demo
In this demo, we detect a few objects such as aeroplane, bicycle, bus, car, cat, cow, dog, horse, motorbike, person, sheep, and train (objects relevant for self-driving cars). We have two versions of object detection. Both demos use the same Caffe-based object-detection model, so the accuracy remains the same for both.
Figure 9: Run Object Detection Demo
See the full log below; user input is shown in bold red fonts.
# sh /run/media/mmcblk1p3/ARROW_DEMOS/run_ml_demos.sh
######## Welcome to ML Demos [AI Corowd Count/Object detection/Face Recognition/Speech Recognition/Arm NN] #########
Prerequisite: Have you run <setup_ml_demo.sh>? Press: (y/n)
Choose the option from following
Press 1: AI Crowd Count...
Welcome to object Detection
This model detect aeroplane, bicycle, bus, car, cat, cow, dog, horse, motorbike, person, sheep, train (objects necessary for self-driving)
Please choose type of camera used in demo
Press 1: For USB Web Cam
Press 2: For D3 Mazzanine Camera
USB Web Camera is used for demo
Enter Camera device node entry e.g.
Using Wayland-EGL
Using the 'xdg-shell-v6' shell integration
Total Elapsed time: 15.52 Approx. FPS: 0.90
Exiting Demo...
In this demo, we need to present an object in front of the camera for it to be detected. A person is the best real-time object for detection; most of the other objects are easily found in the outside environment. However, to test the model we do not need the actual object.
Figure 11: Sample Input Image for Object Detection
Figure 12: Sample Object detection output
As shown in the output image above, the dog is not detected, as it is not at a good angle or exposure. Because of that, the model detects it with a very low score, and we ignore it due to low confidence.
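The low-confidence filtering described above can be sketched in a few lines. This is an illustrative stand-in, not the demo's code: the 0.5 threshold and the detection tuples are assumed values.

```python
# Sketch of low-confidence filtering: detections below a score threshold are
# dropped, which is why the poorly exposed dog is ignored in the output image.
CONF_THRESHOLD = 0.5  # assumed cutoff; the demo's actual value may differ

def filter_detections(detections, threshold=CONF_THRESHOLD):
    """Keep only (label, score) pairs whose score meets the threshold."""
    return [(label, score) for label, score in detections if score >= threshold]

raw = [("person", 0.92), ("dog", 0.11), ("car", 0.67)]
print(filter_detections(raw))  # the low-score dog is dropped
```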
3. Face Recognition Demo
In the face recognition demo, we first detect a face in the given image or video frame. After detecting the face, we apply our pre-trained model to it and try to recognize the face. In this demo, we have used a few ML techniques and OpenCV face-recognition models. We also used "face recognition"...
If the user wants to re-train the model without capturing new training data, then we must not provide a label and simply run the training process.
Pre-requisites:
• Webcam or D3 Mezzanine camera
• USB mouse
• USB keyboard
• A USB hub (if using a USB webcam), because the board has only two USB ports
•...
****** Script Started *******
Setup is already completed. No need to do anything. Exiting...
Choose the option from following
Press 1: AI Crowd Count
Press 2: Object Detection
Press 3: Face Recognition
Select: (1/2/3/4/5)
Welcome to Face Recognition Demo
This is a demo application using Python modules to be run on embedded devices for recognition of faces.
This is because we randomly sample only a few frames from the camera and apply the same face-recognition result to the rest of the frames. As we sample only a few frames, the output is slow to update; you get a correct result after around 2-3 seconds.
Press 2: For Slow Face Recognition. Here we apply face recognition to each camera frame and display the output.
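The fast mode's frame sampling described above can be sketched as follows. The sampling interval and the `recognize()` stand-in are assumptions for illustration; the real demo runs a face-recognition model, not the identity function used here.

```python
# Sketch of fast-mode sampling: run recognition only on every Nth frame and
# reuse the cached result in between, trading latency for frame rate.
SAMPLE_INTERVAL = 10  # assumed: recognize every 10th frame

def annotate_stream(frames, recognize, interval=SAMPLE_INTERVAL):
    """Yield (frame, label); recognition runs only on sampled frames."""
    last_label = None
    for i, frame in enumerate(frames):
        if i % interval == 0:           # sampled frame: run the (slow) model
            last_label = recognize(frame)
        yield frame, last_label         # in-between frames reuse the cache
```

This is why the fast demo's label can lag a couple of seconds behind the video, as noted above.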
Figure 14: Face Recognition Output
If the user wants to re-train the model with a new person/label, then they need to select train model with a label, as shown in the logs below:
Please choose mode of operation for demo
Press 1: Test Model
Press 2: Train Model
Face recognition Training ...
4. Speech Recognition Demo
This audio demo is a use case of the on-board DMIC. Here we have two demos for testing the MIC. In the first demo, we use our custom-trained model for a few selected keywords: "yes no up down left right on off stop go". In this demo we achieved around 80-85% accuracy, as it is a bit harder to identify the correct keyword from audio compared to images, where we easily get more than 90% accuracy with a CNN (convolutional neural network).
Press 2: Object Detection
Press 3: Face Recognition
Press 4: Speech Recognition
Press 5: ARM NN Demo
Select: (1/2/3/4/5)
Welcome to Speech Recognition Demo
This is a demo application using Python modules and Tensorflow to be run on embedded devices for recognition of spoken words.
ALSA lib ../../alsa-lib-1.1.5/src/confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.imx-spdif.pcm.front.0:CARD=0'
ALSA lib ../../alsa-lib-1.1.5/src/conf.c:4554:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib ../../alsa-lib-1.1.5/src/conf.c:5033:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib ../../../alsa-lib-1.1.5/src/pcm/pcm.c:2552:(snd_pcm_open_noupdate) Unknown PCM front
ALSA lib ../../../alsa-lib-1.1.5/src/pcm/pcm.c:2552:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib ../../../alsa-lib-1.1.5/src/pcm/pcm.c:2552:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe...
ALSA lib ../../../alsa-lib-1.1.5/src/pcm/pcm.c:2552:(snd_pcm_open_noupdate) Unknown PCM surround40
ALSA lib ../../alsa-lib-1.1.5/src/confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.imx-spdif.pcm.surround51.0:CARD=0'
ALSA lib ../../alsa-lib-1.1.5/src/conf.c:4554:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib ../../alsa-lib-1.1.5/src/conf.c:5033:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib ../../../alsa-lib-1.1.5/src/pcm/pcm.c:2552:(snd_pcm_open_noupdate) Unknown PCM surround41
ALSA lib ../../alsa-lib-1.1.5/src/confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.imx-spdif.pcm.surround51.0:CARD=0'...
ALSA lib ../../alsa-lib-1.1.5/src/confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.imx-spdif.pcm.iec958.0:CARD=0,AES0=4,AES1=130,AES2=0,AES3=2'
ALSA lib ../../alsa-lib-1.1.5/src/conf.c:4554:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib ../../alsa-lib-1.1.5/src/conf.c:5033:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib ../../../alsa-lib-1.1.5/src/pcm/pcm.c:2552:(snd_pcm_open_noupdate) Unknown PCM iec958
ALSA lib ../../alsa-lib-1.1.5/src/confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.imx-spdif.pcm.iec958.0:CARD=0,AES0=4,AES1=130,AES2=0,AES3=2'
ALSA lib ../../alsa-lib-1.1.5/src/conf.c:4554:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory...
ALSA lib ../../alsa-lib-1.1.5/src/conf.c:4554:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib ../../alsa-lib-1.1.5/src/conf.c:5022:(snd_config_expand) Args evaluate error: No such file or directory
ALSA lib ../../../alsa-lib-1.1.5/src/pcm/pcm.c:2552:(snd_pcm_open_noupdate) Unknown PCM bluealsa
ALSA lib ../../alsa-lib-1.1.5/src/confmisc.c:1281:(snd_func_refer) Unable to find definition 'defaults.bluealsa.device'
ALSA lib ../../alsa-lib-1.1.5/src/conf.c:4554:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib ../../alsa-lib-1.1.5/src/conf.c:5022:(snd_config_expand) Args evaluate error: No such file or directory...
ALSA lib ../../../alsa-lib-1.1.5/src/pcm/pcm_dsnoop.c:575:(snd_pcm_dsnoop_open) The dsnoop plugin supports only capture stream
ALSA lib ../../../alsa-lib-1.1.5/src/pcm/pcm_dsnoop.c:575:(snd_pcm_dsnoop_open) The dsnoop plugin supports only capture stream
[16848.392805] fsl-esai-dai 59010000.esai: ASoC: can't set 59010000.esai hw params: -22
ALSA lib ../../../alsa-lib-1.1.5/src/pcm/pcm_direct.c:1271:(snd1_pcm_direct_initialize_slave) unable to install hw params
[16848.409901] fsl-esai-dai 59010000.esai: the ratio is out of range (1~16)
ALSA lib ../../../alsa-lib-1.1.5/src/pcm/pcm_dsnoop.c:649:(snd_pcm_dsnoop_open) unable to initialize slave
ALSA lib ../../../alsa-lib-1.1.5/src/pcm/pcm_dsnoop.c:575:(snd_pcm_dsnoop_open) The dsnoop...
dmix_48000 : 4
dmix_44100 : 5
dmix_32000 : 6
dmix_16000 : 7
dmix_8000 : 8
dsnoop_48000 : 9
dsnoop_32000 : 10
dsnoop_16000 : 11
asymed : 12
dsp0 : 13
dmix : 14
default : 15
Which input (audio) device you want? Please provide index value (in number) :
You selected audio device :
Expression 'alsa_snd_pcm_hw_params_set_buffer_size_near( pcm, hwParams, &alsaBufferFrames )' failed in '../portaudio/src/hostapi/alsa/pa_linux_alsa.c', line: 922...
stop (prediction score = 22.94)
go (prediction score = 48.45)
left (prediction score = 44.40)
yes (prediction score = 26.72)
right (prediction score = 97.46)
down (prediction score = 27.49)
stop (prediction score = 20.01)
yes (prediction score = 93.53)
no (prediction score = 50.74)
go (prediction score = 51.61)
up (prediction score = 51.55)
^CExiting Demo...
Exiting Demo...
root@imx8qxpaiml:~#
As seen in the output above, we get a lot of errors and warnings from the ALSA library. This is because ALSA tries to configure capture-only devices for playback and vice versa. These errors won't affect the actual behavior, so kindly ignore them. Also, we need to select audio device 1, which is "imx-audio-sph0645: - (hw:1,0)"...
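Rather than reading the index off the printed device list by hand, the DMIC entry could be located programmatically. This is only a sketch: the parsing assumes the "name : index" line shape shown in the demo's device list, and the shortened listing below is a stand-in for the real output.

```python
# Sketch: find the index of the on-board DMIC ("imx-audio-sph0645") in a
# device listing of "name : index" lines like the one the demo prints.
def find_device_index(listing, name="imx-audio-sph0645"):
    """Return the index of the first device whose name contains `name`."""
    for line in listing:
        dev, _, idx = line.rpartition(":")
        if name in dev:
            return int(idx)
    return None

listing = [
    "imx-audio-sph0645: - (hw:1,0) : 1",
    "dmix_48000 : 4",
]
print(find_device_index(listing))  # 1
```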
Speech Recognition thinks you said : Anil open Wikipedia
Main Keyword detected...
Opening WikiPedia in browser...
Speech Recognition thinks you said : Anil
Main Keyword detected...
Speech Recognition thinks you said : Anil open YouTube
Main Keyword detected...
Opening YouTube in browser...
Speech Recognition thinks you said : Anil open YouTube search latest song
Main Keyword detected...
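The first speech demo prints a prediction score for each keyword, as logged further above. Reducing those scores to an accepted keyword can be sketched as follows; the 40.0 cutoff is an assumption for illustration, not the demo's actual setting.

```python
# Sketch: accept the best-scoring keyword only if its prediction score
# (in percent, as printed by the demo) clears a minimum threshold.
ACCEPT_SCORE = 40.0  # assumed cutoff

def best_keyword(scores):
    """scores: dict of keyword -> prediction score (percent)."""
    word, score = max(scores.items(), key=lambda kv: kv[1])
    return word if score >= ACCEPT_SCORE else None

print(best_keyword({"right": 97.46, "stop": 20.01}))  # "right"
print(best_keyword({"stop": 22.94}))                  # None (too uncertain)
```

Thresholding like this is one way to cope with the 80-85% keyword accuracy mentioned earlier: low-score predictions are treated as "no keyword" instead of being acted on.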
5. Basler Camera Demo
In the latest release, we added support for Basler USB cameras. We have tested with the DAA2500-14UM (CS-MOUNT) Basler dart camera.
Link: https://www.baslerweb.com/en/products/cameras/area-scan-cameras/dart/daa2500-14um-cs-mount/#tab=specs
However, this demo app can run with any Basler camera. Here we configure the pylon5 software for our board.
Figure 17: Basler Pylon Viewer App
Note: If we run the app in maximized mode, we observe a background display overlay on the left side. This is a pylon app issue, and it is observed on Linux as well as in the on-board preview.
6. Face Recognition using Tensorflow Lite demo
This demo application uses Haar feature-based cascade classifiers for real-time face detection. The pre-trained Haar cascade classifier for faces is provided as an XML file. Face recognition uses a TensorFlow Lite implementation of MobileFaceNets. The MobileFaceNets model is re-trained with a smaller batch size and input size to get higher performance on a host PC.
Press 5 : ARM NN Demo
Press 6 : Basler Camera demo
Press 7 : Face Recognition using TensorFlow Lite demo
Press 8 : Object Recognition using Arm NN demo
Select: (1/2/3/4/5/6/7/8)
Welcome to Face Recognition using TensorFlow Lite demo
Detecting Biggest Face in Real-Time
Pleae provide Camera Node Entry Node entry e.g.
When the demo is running, it detects the single biggest face in real time. Once the face is detected, you can click the keyboard on the right of the GUI to input the new person's name. Then click 'Add new person' to add the face to the data set. In brief, 1.
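The add-a-person flow above amounts to storing a face embedding under the typed name and later recognizing faces by their nearest stored embedding. The sketch below illustrates that logic only: real embeddings come from the MobileFaceNets TFLite model, and the vectors and the 0.8 distance cutoff here are made-up stand-ins.

```python
# Sketch of the "Add new person" data-set flow: store an embedding per name,
# then recognize a new face by nearest stored embedding within a cutoff.
import math

MATCH_THRESHOLD = 0.8  # assumed distance cutoff

def add_person(dataset, name, embedding):
    dataset[name] = embedding

def recognize(dataset, embedding, threshold=MATCH_THRESHOLD):
    """Return the closest stored name, or None if nothing is near enough."""
    best_name, best_dist = None, float("inf")
    for name, ref in dataset.items():
        dist = math.dist(ref, embedding)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

faces = {}
add_person(faces, "Anil", [0.1, 0.9, 0.3])
print(recognize(faces, [0.12, 0.88, 0.31]))  # "Anil"
```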
7. Object Recognition using Arm NN Demo
In the latest release, we have added support for the Arm NN SDK and the Arm Compute Library. This demo runs an Arm NN example to test the performance of Arm NN on our board. The demo contains samples for running inference and predicting different objects. It also includes an extension that can recognize any given camera input/object.
Press 5 : ARM NN Demo
Press 6 : Basler Camera demo
Press 7 : Face Recognition using TensorFlow Lite demo
Press 8 : Object Recognition using Arm NN demo
Select: (1/2/3/4/5/6/7/8)
Welcome to Object Recognition using Arm NN demo
This is a demo to show how much accuracy in detecting an Object like Cat, Dog, Shark
This will pick the image from data folder and provides the accuracy of respective image with this identity
No.
Top(5) prediction is 23 with confidence: 0.0152533%
= Prediction values for test #1
Top(1) prediction is 282 with confidence: 58.5935%
Top(2) prediction is 200 with confidence: 0.0584546%
Top(3) prediction is 139 with confidence: 0.0434935%
Top(4) prediction is 134 with confidence: 0.0408567%
Top(5) prediction is 133 with confidence: 0.0339192%
Prediction for test case 1 (282) is incorrect (should be 283)
= Prediction values for test #2...
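A "Top(1)..Top(5)" listing like the one above is produced by ranking the network's per-class scores and keeping the five best. The sketch below shows that step in isolation; the score vector is made-up data, not real Arm NN output.

```python
# Sketch: compute a top-k listing (class index + confidence) from a raw
# per-class score vector, as the Arm NN test output above does.
def top_k(scores, k=5):
    """Return [(class_index, score), ...] for the k highest scores."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [(i, scores[i]) for i in order[:k]]

scores = [0.01, 0.58, 0.02, 0.30, 0.05, 0.04]  # illustrative values
for rank, (idx, score) in enumerate(top_k(scores, k=3), start=1):
    print(f"Top({rank}) prediction is {idx} with confidence: {score:.2%}")
```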
This demo's output shows how accurately an object such as a cat, dog, or shark is detected. It picks images from the data folder and reports the accuracy for each image along with its class identity.
Steps to run the MIPI_Camera demo:
Run the /run/media/mmcblk1p3/ARROW_DEMOS/run_ml_demos.sh script and select option 8, then select option 2.
Press 8 : Object Recognition using Arm NN demo
Select: (1/2/3/4/5/6/7/8)
Welcome to Object Recognition using Arm NN demo
This is a demo to show how much accuracy in detecting an Object like Cat, Dog, Shark
This will pick the image from data folder and provides the accuracy of respective image with this identity
No.
Using Wayland-EGL
Using the 'xdg-shell-v6' shell integration
Figure 24: Arm NN Object Recognition using MIPI camera output screen
This runs the TfInceptionV3-Armnn test and parses the inference results to return any recognized object, not only the three expected types of animals. Show the provided flash cards to the camera and wait for the detection message: Image captured, wait.
TROUBLESHOOTING
HDMI
• Although we have provided an HDMI hot-plug detection feature, HDMI must be connected before the board boots. We observed that if HDMI is not connected before boot-up, the hot-plug feature does not work, and no output appears on HDMI even after connecting it later.