Useful Linux Commands for AI Development Series #3 (Advanced)

This article is the last post in the Command Tutorial Series. In Series #1 (Basic), we walked through useful commands for File System Basics, File System Permission & Ownership, SSH (Remote Control), and Monitoring System Loads (CPU, GPU, Memory). In Series #2 (Intermediate), we covered Symbolic Links, Screen, Python pip installation and management, and Git Commands.

In this article, we will walk through more advanced topics: Shell Script, ONNX-TensorRT Conversion, Archiconda 3, and CUDA Setup.

Shell Script


The first thing to do after you create an empty Bash script is to put the following shebang line at the very top, so the system knows which interpreter to run it with:

#!/bin/bash

If it is a Python-based script, insert this line at the top instead:

#!/usr/bin/env python3

# To assign a value to a variable
variable=123   # no space is allowed around "="

# To read a variable, use ${variable} or $variable
# eg #1: 
new_value=${variable}
#eg #2: 
temp=$(vcgencmd measure_temp | egrep -o '[0-9]*\.[0-9]*')
echo "Current temperature : $temp C"

# To pass arguments from the console into the script
# $1, $2, $3 ... $n refer to the input arguments from the console.
# eg: for ./myscript.sh A B C, $1 refers to A and $2 refers to B
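The positional parameters can be sketched in a tiny self-contained script; here `set --` simulates passing `A B C` from the console, so the snippet runs as-is:

```shell
#!/bin/bash
# Simulate running the script as: ./myscript.sh A B C
set -- A B C

echo "argument count: $#"    # prints 3
echo "first argument: $1"    # prints A
echo "second argument: $2"   # prints B
echo "all arguments: $@"     # prints A B C
```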

# if statement
if condition
then
    # commands
fi

# if else
if condition
then
    # commands
else
    # other commands
fi

#  sign    reference
#  -eq     equal
#  -ne     not equal
#  -gt     greater than
#  -lt     less than
#  -ge     greater than or equal to
#  -le     less than or equal to

# eg #1:

# if the number of arguments is not equal to 1, then execute the echo command
if [ "$#" -ne 1 ]; then
    echo "Usage: $0 <Install Folder>"
    exit 1
fi

# Note: make sure you place a "space" on each side of the sign as shown above.
# if [ "$#"-ne1 ], [ "$#"!=1 ], or ["$#" -ne 1] will not work.
# ** Spaces do matter.

# eg #2:

# else if
if [ "$1" -ge 18 ]
then
    echo You may go to the party.
elif [ "$2" == 'yes' ]
then
    echo You may go to the party but be back before midnight.
else
    echo You may not go to the party.
fi

# eg #3: 

# multiple conditions AND example
if [ -r "$1" ] && [ -s "$1" ]
then
    echo This file is useful.
fi

# eg #4: 
# multiple conditions OR example
if [ "$USER" == 'bob' ] || [ "$USER" == 'andy' ]
then
    ls -alh
fi
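Putting the pieces together, a complete runnable version of the if/elif/else example might look like this (the age and permission values are just sample data):

```shell
#!/bin/bash
# Combined if / elif / else example with sample values
age=20
permission='no'

if [ "$age" -ge 18 ]; then
    echo "You may go to the party."
elif [ "$permission" == "yes" ]; then
    echo "You may go to the party but be back before midnight."
else
    echo "You may not go to the party."
fi
```

With age=20, the first branch matches and the script prints the first message.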

I recommend checking out more examples here.

Save and Export Absolute Path
# Saves the absolute path of the current working directory to the variable cwd
$ cwd=$(pwd)
$ echo ${cwd}

# Append the current directory to PATH
$ export PATH=$PATH:$(pwd)

# Export a specific path to a variable of your own
# (do not overwrite PATH itself, or the shell can no longer find commands)
$ export MYPATH=${HOME}/somewhere
$ cd $MYPATH
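One common use of saving the working directory: do some work elsewhere, then jump back. A minimal sketch:

```shell
#!/bin/bash
# Remember where we are, work somewhere else, then return
cwd=$(pwd)
cd /tmp
echo "working in: $(pwd)"
cd "$cwd"
echo "back in:    $(pwd)"
```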
User Input
# Ask the user for login details
read -p 'Username: ' uservar
read -sp 'Password: ' passvar
echo "Thank you $uservar, we now have your login details"

* -p allows you to specify a prompt
* -s makes the input silent

# Method #1 with read -p
read -p " -> Put the package # you wish to install here: " num

# Method #2 with echo -n
echo -n " -> Put the package # you wish to install here: "
read num

# Terminal (output of the login example above)

user@host:~$ ./
Username: ryan
Thank you ryan, we now have your login details
user@host:~$



Functions

# Define a basic function
print_something () {
    echo Hello I am a function
}

# Execute the function (just call it by name)
print_something
print_something

# Terminal Output
user@host:~$ ./
Hello I am a function
Hello I am a function
user@host:~$
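Functions can also take their own arguments, passed the same way as script arguments (the function name `greet` here is just an example):

```shell
#!/bin/bash
# Inside a function, $1, $2 ... refer to the FUNCTION's arguments,
# not the script's. (greet is a hypothetical example function.)
greet () {
    echo "Hello, $1!"
}

greet Alice   # prints "Hello, Alice!"
greet Bob     # prints "Hello, Bob!"
```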

# Execute a script with "source"
# The shortcut for "source" is ". " (a dot followed by a space)
# eg (myscript.sh is an example name):
$ source myscript.sh
# OR
$ . myscript.sh

# However, after writing commands into ~/.bashrc, you need to run "source ~/.bashrc" to apply them
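The practical difference between "source" and running a script in a child shell can be demonstrated with a throwaway file (assuming /tmp is writable; /tmp/setvar.sh is just an example):

```shell
#!/bin/bash
# "source" runs a script in the CURRENT shell; "./" runs it in a child
# shell, so variables set by the script do not survive the child.
echo 'MYVAR=hello' > /tmp/setvar.sh
chmod +x /tmp/setvar.sh

bash /tmp/setvar.sh               # child shell: MYVAR not set here afterwards
echo "after child run: '$MYVAR'"  # prints ''

source /tmp/setvar.sh             # current shell: MYVAR is now set
echo "after source:    '$MYVAR'"  # prints 'hello'
```

This is exactly why changes to ~/.bashrc need `source ~/.bashrc` rather than executing it.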

# Execute a script with "./"
# You need to give your file executable permission first (myscript.sh is an example name)
$ sudo chmod +x myscript.sh
$ ./myscript.sh

# Run a script as a command without "./" and ".sh"
# Copy it into a directory on your PATH, such as /usr/bin.
# Avoid the name "test": it clashes with the shell built-in of the same name.
$ sudo cp myscript.sh /usr/bin/mytool
$ mytool
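A per-user alternative that avoids sudo entirely is a private ~/bin directory; a minimal sketch (`mytool` is a hypothetical name):

```shell
# Sketch: install a script as a personal command (all names are examples).
mkdir -p "$HOME/bin"

# Create a tiny example script called "mytool"
printf '%s\n' '#!/bin/bash' 'echo hello from mytool' > "$HOME/bin/mytool"
chmod +x "$HOME/bin/mytool"

# Put ~/bin on PATH for this session (append the export to ~/.bashrc to persist)
export PATH="$HOME/bin:$PATH"

mytool    # runs without "./" or ".sh"
```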

ONNX-TensorRT Conversion for YOLOv3


ONNX is an open format built to represent machine learning models. ONNX defines a common set of operators – the building blocks of machine learning and deep learning models – and a common file format to enable AI developers to use models with a variety of frameworks, tools, runtimes, and compilers. LEARN MORE.

NVIDIA TensorRT™ is an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that delivers low latency and high-throughput for deep learning inference applications. LEARN MORE.


I found a very useful repo for those who want to convert their ONNX model to a TensorRT engine and run real-time inference applications with TensorRT. Check out the repo here.



This repo should work for both ARM64 and x86 computers as long as your environment setup meets all the requirements in the repo.

I strongly recommend setting up a virtual environment or conda environment for this project because you might encounter package conflict issues.
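For example, one way to set that up (the environment name trt-demo and the paths are just examples; the conda variant assumes conda/Archiconda is installed):

```shell
# Option 1: a Python virtual environment
python3 -m venv "$HOME/envs/trt-demo"
source "$HOME/envs/trt-demo/bin/activate"

# Option 2: a conda / Archiconda environment (if conda is installed)
# conda create -n trt-demo python=3.6
# conda activate trt-demo
```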

# Check if TensorRT is already installed
$ dpkg -l | grep TensorRT

# Install ONNX
$ sudo apt install protobuf-compiler libprotoc-dev
$ pip install onnx==1.4.1

# Check TensorRT for Python 3
$ pip list
# If it has not been installed yet:
# for x86 computer users, check the Nvidia website for the installation guide.
# For Jetson users, copy the local packages into your environment's libraries.
$ cd /usr/lib/python3.6/dist-packages/

# Copy the packages below

# to ~/.virtualenvs/envname/lib/python3.6/site-packages
# or to ~/archiconda3/envname/lib/python3.6/site-packages
# or to ~/.conda/envname/lib/python3.6/site-packages
# envname is the name of your environment

# Download the source code
$ git clone

# Install pycuda
$ pip install pycuda

# Download yolov3 weight
$ cd ${HOME}/project/tensorrt_demos/yolov3_onnx
$ ./

# Convert the yolov3-416 model to ONNX
$ python3 --model yolov3-416

# Convert yolov3-416.onnx to yolov3-416.trt
$ python3 --model yolov3-416

# Download the testing image
$ wget -O ${HOME}/Pictures/dog.jpg
# Run the test
$ python3 --model yolov3-416 --image --filename ${HOME}/Pictures/dog.jpg

# Or to open a USB Camera live show
$ python3 --model yolov3-416 --usb --vid 0 --height 720 --width 1280

# Notes: for opening a USB webcam, you need to modify a camera config file
# located in ~/tensorrt_demos/utils/
# Set USB_GSTREAMER to False
USB_GSTREAMER = False		# changed from True to False

# Or to use a media file as input
# add the following options:
--file --filename shinjuku.mp4
# The full command should look like:
$ python3 \
    --model yolov3-tiny-416 \
    --height 720 --width 1280 \
    --file --filename shinjuku.mp4

Archiconda 3

Archiconda3 is a distribution of conda for 64-bit ARM. Anaconda is a free and open-source distribution of the Python and R programming languages for scientific computing (data science, machine learning applications, large-scale data processing, predictive analytics, etc.) that aims to simplify package management and deployment. Like Virtualenv, Anaconda uses the concept of creating environments to isolate different libraries and versions.

For more information about the setup and usage of Archiconda 3, please visit the site here.

CUDA Setup

CUDA stands for Compute Unified Device Architecture. It is an extension of the C programming language created by Nvidia, and it allows the programmer to take advantage of the massive parallel computing power of an Nvidia graphics card for general-purpose computation.

For Jetson devices, you do not need to install CUDA and cuDNN because they are pre-installed in the JetPack. The setup below is mainly for computers running an x86-based operating system and equipped with one or more Nvidia graphics cards.

Notes: If you want to set up CUDA on your cloud server, using the following setup should work.

CUDA 10.0
# Installation Instructions:
$ wget
$ sudo dpkg -i cuda-repo-ubuntu1804_10.0.130-1_amd64.deb
$ sudo apt-key adv --fetch-keys
$ sudo apt-get update
$ sudo apt-get install cuda

$ echo 'export CUDA_HOME=/usr/local/cuda' >> ~/.bashrc
$ echo 'export PATH=$PATH:$CUDA_HOME/bin' >> ~/.bashrc
$ echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA_HOME/lib64' >> ~/.bashrc

$ source ~/.bashrc

# Test if everything is working fine.

$ nvcc --version
$ nvidia-smi

cuDNN (for CUDA 10.0)
$ wget
$ sudo tar -xzvf cudnn-10.0-linux-x64-v7.5.0.56.tgz -C /usr/local/
$ sudo chmod a+r /usr/local/cuda/include/cudnn.h

# install python-nvcc plugin
$ sudo apt install python3-pip
$ sudo -H pip3 install --upgrade pip
$ sudo apt-get install unzip
$ sudo pip install git+git://

# check if installed successfully
$ sudo /usr/local/cuda/bin/nvcc --version

That’s all for the Useful Linux Commands for AI Development series. I hope you found something useful for your project development. Good luck!
