I love Nvidia’s new embedded computers. The Nvidia Jetson embedded computing product line, including the TK1, TX1, and TX2, is a series of small computers made to smoothly run software for computer vision, neural networks, and artificial intelligence without using tons of energy. Better yet, their developer kits can be used as excellent single board computers, so if you’ve ever wished for a beefed-up Raspberry Pi, this is what you are looking for. I personally use the Jetson TX2, which is the most powerful module available and is widely used.
One of the big drawbacks of Jetson devices is that the documentation does not (and cannot) cover all use cases. The community has yet to mature to the point where you can find some random blog’s guide on any random thing you need to do (à la Raspberry Pi and Arduino), so you’ll often have to figure things out for yourself.
But I am here to dispel the mystery around at least one thing — using CSI cameras on your TX2. These methods should work on other Jetson devices too!
We’re going to look at utilizing the Jetson’s image processing powers and capturing video from the TX2’s own special CSI camera port. Specifically, I’ll show you:
- Why you’d even want a CSI camera.
- Where to get a good CSI camera.
- How to get high resolution, high framerate video off your CSI cameras using gstreamer and the Nvidia multimedia pipeline.
- How to use that video in OpenCV and ROS.
Why CSI cameras (vs USB)?
CSI cameras should be your primary choice of camera if you are looking to push for maximum performance (in terms of FPS, resolution, and CPU usage) or if you need low-level control of your camera — and if you are willing to pay a premium for these features.
I personally use CSI cameras because I need high resolution video while maintaining acceptable framerate. With the TX2 and a Leopard Imaging IMX377CS I easily pull 4k video at ~20 fps. Awesome. I also like the ability to swap out lenses on CSI cameras, which typically use small format C-Mount or M12 lenses. Due to the popularity of the GoPro, there are plenty of C/CS-Mount lenses as well as lens adapters for converting DSLR camera lenses to C-Mount.
USB cameras, on the other hand, can be incredibly cheap, typically work out of the box via the V4L2 protocol, and are an excellent choice for applications where you don’t need high-performance video. You can get 720p video for only $20 using the Logitech C270, as California Polytechnic State University did in their well documented ‘Jet’ Robot Kit, which was enough for their robot toy car to identify and collect objects, find faces, locate lines, etc.
An awesome post on the Nvidia developer forums by user Jazza points out even further comparisons between USB and CSI cameras:
USB Cameras:
- Are easy to integrate.
- Can do a lot of the image work off-board (exposure control, frame rate, etc).
- Many provide inputs/interrupts that can help time your application (e.g. interrupt on new frame).
- Use CPU time due to the USB bus, which will impact your application if it already uses 100% of the CPU.
- Are not optimal for use of hardware vision pipeline (hardware encoders, etc).
- Can work over long distances (up to max of USB standard).
- Can support larger image sensors (1″ and higher for better image quality and less noise).
CSI Bus Cameras:
- Optimized in terms of CPU and memory usage for getting images processed and into memory.
- Can take full advantage of hardware vision pipeline.
- Short distances from TX1 only (10cm max usually) unless you use serialization systems (GMSL, FPD Link, COAXPress, Ambarella) which are immature and highly custom at the moment.
- Are mostly smaller sensors from phone camera modules but custom ones can be made at a price. The added noise from the smaller sensor can be mitigated a bit through the hardware denoise in TX1/2.
- Give you access to low-level control of the sensor/camera.
I recommend you check out the full post for further insights, such as considerations for networked cameras.
Why do CSI cameras perform better than USB?
The biggest issue with USB is bandwidth and processing needs. USB 3.0 can push 5 Gbps, which is technically enough to push an uncompressed 1080p video stream at 60 fps or even 4K (3840×2160) at 20 fps (see for yourself). But this is based on bandwidth alone and does not reveal the additional processing and memory management bottlenecks in handling the video. For example, the See3CAM_CU130 USB 3.0 camera should be capable of 60 fps at 1080p, but in a real world test on the TK1 it only eked out 18 fps at 1080p compressed and a paltry 1 fps uncompressed. While performance would be better on a more powerful machine, this is evidence of the problem.
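As a rough sanity check on those bandwidth numbers (assuming uncompressed 24-bit RGB): 1920 × 1080 pixels × 24 bits × 60 fps ≈ 2.99 Gbps, and 3840 × 2160 × 24 bits × 20 fps ≈ 3.98 Gbps, both under USB 3.0’s 5 Gbps. So in theory the bus can carry it; the bottleneck in practice is the processing, not the raw bandwidth.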
In contrast, the Jetson TX1 and TX2 utilize “six dedicated MIPI CSI-2 camera ports that provide up to 2.5 Gb/s per lane of bandwidth and 1.4 gigapixels/s processing by dual Image Service Processors (ISP).” In other words, they have the bandwidth for three 4K cameras (or six 1080p cameras at 30 fps). Again, bandwidth isn’t everything, because those images still need to be moved and processed, but by using the hardware vision pipeline the images skip loading into DRAM, and CPU load is reduced by processing video independently of the primary CPU. In my own experience, I’ve been able to run 4K video at ~20 fps by utilizing these hardware features on the TX2. This is why video works so efficiently through CSI cameras — independent hardware specialized for video, much like a GPU is specialized for 3D graphics.
Where to get CSI cameras (for Jetson devices)
In my own research, I’ve found only a handful of resources on finding CSI cameras. The Jetson Wiki has a decent page surveying different camera options and you may be able to find some tips on the Jetson developer forums, but that’s about it.
As for actual shops, there is:
- e-con Systems, who make both CSI cameras and USB cameras for Jetson devices.
- Leopard Imaging, who make CSI cameras for Jetson.
Both of these companies are official imaging partners with Nvidia and provide the drivers and instructions needed to pull data from the camera.
I personally use the Leopard Imaging IMX377CS and find it quite capable. Plus, they have pretty good instructions for installing the drivers, which is always welcome.
Getting Video off a CSI camera
In Nvidia’s “Get Started with the JetPack Camera API” they explain that the best way to interface with the Jetson’s multimedia hardware (including the ports for CSI cameras) is via their libargus C++ library or through gstreamer. Nvidia does not support the V4L2 video protocol for CSI cameras. Since gstreamer is well documented and very common, I’ve focused on it.
GStreamer is configured using pipelines, which explain the series of operations applied to your video stream from input to output. The crux of getting video from your CSI camera boils down to being able to (1) use gstreamer in your program and (2) use efficient pipelines.
A Note on Drivers: You will most likely need to install the drivers for your camera before any of the GStreamer functionality will even work. Since CSI cameras tend to be a smaller market, you might not find a guide online but should be able to get one from the manufacturer. Leopard Imaging, for example, provided a nice guide (over email) for setting up their drivers, but it only got me to the point of using GStreamer in the terminal. In this post, we’ll venture further and get that data into your code.
Selecting the right pipelines
As I just mentioned, one of the keys to getting quality performance with CSI cameras is using the most efficient gstreamer pipelines. This generally means outputting in the correct format. You will see me repeatedly use a pipeline along the lines of:
nvcamerasrc ! video/x-raw(memory:NVMM), width=1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink
The very important part here is video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR, which ensures that the raw video from the CSI camera is converted to the BGR color space.
In the case of OpenCV and many other programs, images are stored in this BGR format. By using the image pipeline to pre-convert to BGR, we ensure that those hardware modules are used to convert the images rather than the CPU. In my own experimentation, using a pipeline without this conversion results in horrible performance, at about 10fps max for 1080p video on the TX2.
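To make the pieces easier to follow, here is the same pipeline again, just split across lines and annotated (gst-launch and OpenCV both take it as a single string; the comments are only for reading):
nvcamerasrc                       # Nvidia's CSI camera source; frames land in NVMM (hardware) memory
  ! video/x-raw(memory:NVMM), width=1920, height=1080, format=I420, framerate=30/1
  ! nvvidconv flip-method=2       # hardware-accelerated conversion (and optional flip) out of NVMM
  ! video/x-raw, format=BGRx
  ! videoconvert                  # BGRx -> BGR, the layout OpenCV expects
  ! video/x-raw, format=BGR
  ! appsink                       # hands the finished frames to your application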
Command line tools
There are a few command line tools I’ll briefly note.
nvgstcapture
nvgstcapture-1.0 is a program included with L4T that makes it easy to capture and save video to file. It’s also a quick way to pull up the view from your camera.
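For example, running it with no arguments should simply open a live preview from the default CSI camera (the available options for resolution and capture mode vary by L4T version, so check nvgstcapture-1.0 --help on your device):
nvgstcapture-1.0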
gst-launch
You can run a GStreamer pipeline with gst-launch-1.0 <pipeline>.
Example 1: View 1080p video from your camera
gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)60/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvoverlaysink -e
Example 2: View 1080p video from your camera and print the true fps to console.
gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)60/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! fpsdisplaysink text-overlay=false -v
Check out this Gstreamer pipelines for Tegra X1 guide for more example pipelines.
gst-inspect
You can inspect pipeline elements with gst-inspect-1.0.
Example: Inspect the capabilities of the CSI camera interface.
gst-inspect-1.0 nvcamerasrc
OpenCV
Alright, so let’s start capturing video in our own code rather than just messing with stuff in the terminal.
When setting up your Jetson device, Nvidia JetPack installs a special, closed-source version of OpenCV called OpenCV4Tegra, which is optimized for Jetson and is slightly faster than the open source version. While it is nice that OpenCV4Tegra runs faster than plain OpenCV 2, OpenCV 2 does not support video capture from gstreamer, so we won’t be able to easily grab video with it.
OpenCV 3 does support capturing video from gstreamer if you compile it from source with the correct options, so we’ll replace OpenCV4Tegra with a self-compiled OpenCV 3. Once this is done, it is quite easy to capture video via a gstreamer pipeline.
Compiling OpenCV 3 with GStreamer support on Nvidia Jetson
- Remove OpenCV4Tegra by running:
sudo apt-get purge libopencv4tegra-dev libopencv4tegra
sudo apt-get purge libopencv4tegra-repo
sudo apt-get update
- Download Jetson Hacks’ Jetson TX2 OpenCV installer:
git clone https://github.com/jetsonhacks/buildOpenCVTX2.git
cd buildOpenCVTX2
(More info on this script at Jetson Hacks’ own install guide.)
- Open buildOpenCV.sh and change the line -DWITH_GSTREAMER=OFF \ to -DWITH_GSTREAMER=ON \. This will ensure OpenCV is compiled with gstreamer support.
- Build OpenCV by running the install script. This will take some time.
./buildOpenCV.sh
Jetson Hacks also warns that “sometimes the make tool does not build everything. Experience dictates to go back to the build directory and run make again, just to be sure.” I recommend the same. Check out their video guide if you really need help.
- Finally, switch to the build directory to install the libraries you just built.
cd ~/opencv/build
sudo make install
Video Capture from GStreamer pipeline in OpenCV
Now that we have an installation of OpenCV that can capture video from gstreamer, let’s use it! Luckily, I have a nice C++ example script on Github designed to capture and display video from gstreamer with OpenCV. Let’s take a look.
First, we define an efficient pipeline to use, using Nvidia’s nvcamerasrc interface and ensuring we pre-convert to BGR color space. Then we define a capture object that uses GStreamer. Finally, we capture each frame and display it in an infinite loop. That’s it!
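Here is a minimal sketch of that flow (not the exact script from the repository; it assumes OpenCV 3 built with GStreamer support as described above, and reuses the nvcamerasrc pipeline from earlier):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>

int main() {
    // Efficient CSI pipeline: capture into NVMM memory, pre-convert to BGR for OpenCV.
    std::string pipeline =
        "nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, "
        "format=(string)I420, framerate=(fraction)30/1 ! "
        "nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! "
        "videoconvert ! video/x-raw, format=(string)BGR ! appsink";

    // Open the pipeline with the GStreamer backend (the two-argument form needs OpenCV >= 3.2).
    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        std::cerr << "Failed to open CSI camera via GStreamer." << std::endl;
        return 1;
    }

    cv::Mat frame;
    while (true) {
        if (!cap.read(frame)) break;      // grab a BGR frame from appsink
        cv::imshow("CSI camera", frame);
        if (cv::waitKey(1) == 27) break;  // quit on ESC
    }
    return 0;
}

Compile it with C++11 enabled and link against your OpenCV 3 install (e.g. opencv_core, opencv_highgui, opencv_videoio); a commenter’s example compile command appears further down.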
I also have another script for testing performance at various resolutions and framerates.
Note to ROS users: If you have ROS installed, you are likely to find this script does not work. This is typically because your ~/.bashrc includes the line source /opt/ros/<distro>/setup.bash near the end, as the ROS install guide recommends. This causes ROS to import its own OpenCV version over your install. If you only need video capture, then just use jetson_csi_cam from the next section with ROS. If you don’t need ROS for some particular application, one workaround for this dependency issue is to remove the line from ~/.bashrc and run the setup file only when you need ROS (see the sketch after these notes).
Custom OpenCV and ROS together? You should not try to use a self-compiled OpenCV install in a ROS program. ROS uses its own versions of OpenCV, called opencv2 and opencv3, which are involved in a tangled web of dependencies that is hard to avoid. If you are willing to let things get messy, it is possible to use your own version of OpenCV, but I really would not recommend it.
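For reference, the ~/.bashrc workaround looks something like this (a sketch only, assuming the kinetic distro; adjust for yours):
# In ~/.bashrc, comment out the automatic ROS setup:
#   source /opt/ros/kinetic/setup.bash
# Then, only in the terminals where you actually need ROS, run it manually:
source /opt/ros/kinetic/setup.bash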
Robot Operating System (ROS)
Getting your CSI camera up and running in ROS is even easier than OpenCV. All you need to do is install my ROS package, jetson_csi_cam (the README will guide you through all the steps you need). It works in the same way as the OpenCV solution but uses a different library for grabbing video from gstreamer and provides the extra niceties expected in ROS. Importantly, it also uses the correct, efficient pipelines for CSI cameras.
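Once it is installed, starting the camera is a single launch command along these lines (argument names may differ between versions, so double-check the README):
roslaunch jetson_csi_cam jetson_csi_cam.launch width:=1920 height:=1080 fps:=30
The video is then published as a standard ROS image topic (e.g. /csi_cam/image_raw), which you can view with rqt_image_view or feed into the rest of your ROS graph.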
I have a TX-1 and I would like to know if I can build opencv 3 on the TX1?
Yes, you should be able to follow the same steps, for the most part.
The TX2 and TX1 are generally the same and share software compatibility, since they run the same Linux version. Double checking Jetson Hacks’ guide, the only thing I can think of is that you might need to change the line make -j6 at the bottom of buildOpenCV.sh to make -j4, since the TX1 only has 4 cores.
Let me know how it goes!
Thanks for this nice guide.
One note: the flip-method=2 of nvvidconv is no longer useful since L4T R28.1 (JetPack3.1). Now nvcamerasrc provides the image with correct orientation.
Nice tutorial! By the way, OpenCV 2 does support Gstreamer in the exact same way as OpenCV 3: it won’t support it by default but if you’ve installed the Gstreamer dev packages before you run cmake & make to build OpenCV then your OpenCV 2 or 3 will include full Gstreamer support. But like you mentioned, unfortunately OpenCV4Tegra doesn’t come pre-built with Gstreamer support.
How can I tell what version of OpenCV I have on my TX1?
I would run a script and print out the version.
Command: python -c 'import cv2; print(cv2.version)'
This should print out the version for you.
That would be cv2.__version__ (two underscores on each side).
Hi Peter,
Thanks for the great tutorial. I have some Leopard Imaging cameras for my TX2. I followed their guide on installing drivers, but I’m still unable to access the cameras with nvgstcapture-1.0 (as they recommend). Instead I get:
Inside NvxLiteH264DecoderLowLatencyInit
NvxLiteH264DecoderLowLatencyInit set DPB and Mjstreaming
Inside NvxLiteH265DecoderLowLatencyInit
NvxLiteH265DecoderLowLatencyInit set DPB and Mjstreaming
Socket read error. Camera Daemon stopped functioning…..
gst_nvcamera_open() failed ret=0
** (nvgstcapture-1.0:21967): CRITICAL **: can’t set camera to playing
** (nvgstcapture-1.0:21967): CRITICAL **: Capture Pipeline creation failed
Any suggestions? Did everything work as expected when you installed them?
Thanks,
Yusuf
Hmm, I can think of a few things, though I can’t be sure.
Are you using the TX2 guide? I don’t know if it matters, but they initially provided me with a guide for the TX1 and then gave me another one for the TX2 after I asked.
Do you have the camera plugged into channel 1? I had errors when it was not.
I assume I have the correct one; it’s titled IMX185_TX2_20170607.txt.
I have the 6 camera module (http://shop.leopardimaging.com/product.sc?productId=334&categoryId=44) with all 6 cameras plugged in.
I’m trying to troubleshoot what I may have done wrong. If you think of anything else, let me know. I really appreciate the help.
For reference, I flashed the board with L4T R27.1 using JetPack 3.0, then followed their steps (copying Image, zImage, the dtb file, the 4.4.15-tegra-leopard module, and camera_overrides.isp to the appropriate locations on the TX2 and editing extlinux.conf).
I’d contact Leopard Imaging, they were a ton of help in the past. It sounds like you’ve done everything right.
Hi Peter,
I followed your tutorial. I tried to compile “another script for testing FPS at various different resolutions and FPS” but it failed and showed me the following error:
In file included from /usr/include/c++/5/chrono:35:0,
from main.cpp:2:
/usr/include/c++/5/bits/c++0x_warning.h:32:2: error: #error This file requires compiler and library support for the ISO C++ 2011 standard. This support must be enabled with the -std=c++11 or -std=gnu++11 compiler options.
#error This file requires compiler and library support \
Can you explain why?
Best,
PD
Because that program uses the chrono library, it needs C++11 support. Try enabling it with the compiler option -std=c++11 or -std=gnu++11, or update your compiler.
Hi Peter,
Thanks for the nice guide. But I have a question: do you know how many cameras can be used through the USB port on the TX2?
Regards.
Eren
The TX2 dev board only has one USB 3 port. While you can use a USB hub, it still only provides the same amount of bandwidth, so you can only connect as many cameras as USB 3.0 bandwidth allows, i.e. 5 Gbps. This is technically enough to push one uncompressed 1080p video stream at 60 fps. However, the ideal is not typically achieved.
Thus, as a rule of thumb, I’d feel safe using two 720p cameras on one port. More might become problematic.
If you want to use more cameras, you’ll need to get more USB ports via expansion. In the end, however, MIPI cameras will give you better performance than USB.
There is also good discussion over here in the Nvidia forums: https://devtalk.nvidia.com/default/topic/1024995/connecting-multiple-usb-3-0-cameras-to-a-tx1-tx2/
Great explanation! Thanks for the reply.
I get a compile problem on my TX2 attempting to install jetson_csi_cam, it can’t find a couple of packages:
CMake Warning at /opt/ros/kinetic/share/catkin/cmake/catkinConfig.cmake:76 (find_package):
Could not find a package configuration file provided by
“camera_calibration_parsers” with any of the following names:
Offhand, do you know what I need to install or what the path is?
It seems like you might not have the image_common package, which includes camera_calibration_parsers.
It is possible you did not install these while setting up ROS. There are a few different Linux packages, and unless you installed ros-kinetic-desktop-full, you do not have the required packages (though I’m not sure why gscam does not complain).
Try running
or get everything with
More details about what gets installed for each option can be found here.
That did the trick for compiling it; I had installed ros-kinetic-desktop (not full), and installing ros-kinetic-perception manually worked. I am getting an error message when I launch, though. Do you know where I should look for the cause of this error?
auto-starting new master
process[master]: started with pid [2530]
ROS_MASTER_URI=http://localhost:11311
setting /run_id to ccf8d9aa-d7b3-11e7-9664-00044b6692f9
process[rosout-1]: started with pid [2543]
started core service [/rosout]
process[csi_cam-2]: started with pid [2546]
(gscam:2546): GStreamer-CRITICAL **: gst_element_make_from_uri: assertion ‘gst_uri_is_valid (uri)’ failed
Unfortunately, I know about the error but don’t know why it happens. Are you getting the video stream? The video output still worked for me.
When I discovered it, I marked it as an issue on GitHub but did not get the chance to iron it out.
Ah, okay. I do pick up the image stream; when I run rqt in another window, there it is. So it works! Cool. Thanks for the help!
Great to hear! Good luck with your work, it’s quite exciting to see a project get reused.
Do you know how I can set the camera exposure, white balance, etc.?
Hmm, no, I haven’t experimented with that. White balance can be done in post, but the drivers might provide a way to control aperture and shutter.
We never managed to get it to work with GStreamer and IMX477 cameras. There is another option, the Argus library, but it works at a lower level and transferring captured frames to OpenCV (especially in Python) isn’t so straightforward.
Hello,
Thanks for your post, it’s very informative. Do you have any experience/insight on how to use an HDMI camera with the TX2? There’s an HDMI to CSI-2 adaptor by Auvidea (B10x) but no driver for the TX2 at the moment. Are you maybe aware of any workarounds?
Thanks
Vivi
I don’t have too much experience with HDMI capture, but looking at the B10X spec sheet, it says it supports the TX1. In that case, it should support the TX2.
Otherwise, I would recommend going with a HDMI to USB capture device, but the CSI capture looks really interesting and should get better performance.
Unfortunately, it seems the B10x doesn’t support TX2 at the moment because the drivers are under development by auvidea with no release date information. Thanks for your reply!
Just in case, it might be good to double check with their support! As far as I know, the TX2 is fully compatible with the TX1.
Hi, great article! I’ve only recently run into issues using ZR300 on a TX2 (V4L2), enabling USB3, patching the kernel, only to realise that Intel has abandoned ZR300.
So I started looking at camera modules from China. There seems to be an abundance of them. Just wondering, apart from drivers and gstreamer, do I need anything else? The Alibaba camera modules from various makers do not seem to include any kind of logic board or circuit board; they only have a CMOS sensor, the lens, and a 24-pin golden finger connection. Is that enough to plug into the SDK board (or any carrier board for that matter)?
Unfortunately, I don’t have any experience with sourcing my own sensors and connecting them to the TX2. I’ve mostly stuck with stuff from Leopard Imaging and See3Cam because I knew they should work.
The TX2 does not have a way to plug in a 24pin golden finger connection directly. In fact, it seems there is not really a standardized connector for CSI/MIPI connection, so it would be up to you to connect all the pins correctly. I don’t think there is a need for a logic board as much as a breakout board.
However, even if you were to wire things up correctly, your biggest problem would be getting the right drivers for the camera that will work on the TX2. That’s the main reason I would avoid a DIY approach without seeing someone with more experience try it first.
Hi Peter, thanks for the reply.
This is why I am confused. I am looking at similar cameras such as the one for raspberry pi 3, and it is indeed a CSI connection which plugs into the board. However I am guessing that the 24Pin gold connection is not a CSI-2/MIPI cable, and that some kind of adapter is needed?
I browsed e-consystems (I think there’s links about them on the NVIDIA forums) and I found quite a few cameras which are in a much more reasonable range ($25-$45) and claim to connect to TX1/TX2. Then I also found that Auvidea has CSI-2/MIPI adapters (branded by Toshiba) so I am going to make the assumption that indeed the 24Pin gold output from the Chinese cameras is not the correct one.
You are correct about the drivers though, this could easily lead to being a nightmare.
Best regards,
A.
the example compile command: $ gcc -std=c++11 gstreamer_view.cpp -o gstreamer_view -L/usr/lib -lstdc++ -lopencv_core -lopencv_highgui -lopencv_videoio
How to capture from GPU?
Hi Morgan Peter,
I would like to know if it is possible to read a video directly from the GPU with OpenCV, using the Jetson TX2 onboard camera, without capturing the image with the CPU and then uploading it to the GPU.
Could you suggest some tutorials to follow or some code lines?
Thank you so much for the great tutorial.
I don’t know much about capturing video straight to the GPU, but this conversation on the Nvidia forums seems like a start: https://devtalk.nvidia.com/default/topic/987076/gpudirect-rdma-on-jetson-tx1-/
Thank you Mr. Moran, sounds interesting.
You can use gstreamer plugin nvivafilter for processing with opencv gpu functions from NVMM memory. Check this topic: https://devtalk.nvidia.com/default/topic/1022543/jetson-tx2/gstreamer-nvmm-lt-gt-opencv-gpumat/post/5208232/#5208232
Excellent tutorial. Thank you. I am just coming up to speed (slowly) on MIPI-CSI II and the Jetson platform.
Was curious about the possibility of capturing lower frame rate (but high resolution) images to an attached storage device while also streaming 24-30 fps H.264 video of the same source from an individual 2-lane channel. Is it possible to run two types of compression on the same stream, or does one need some other piece of hardware to “split” the CSI-II stream (FPGA/MPSoC)?
The compression on the low frame rate stills could be JPEG/PNG;
Streaming video is H.264 compression.
Cameras are 1/3″ CMOS, 1920 x 1080, 30 fps 4:2:2 (10 bit color depth) over MIPI-CSI II D-PHY physical layer.
Thanks for your time
Have you found a decent carrier board that 1) is the size of the TX2 module, and 2) has video inputs that accept the Ipex format CSI cable that Leopard uses? I’m having a really hard time finding a carrier board that isn’t gigantic and has 2x CSI interfaces with Ipex style connections. -Jeremy
Thanks for the fantastic article! Do you happen to know how I can integrate one of these CSI / MIPI cameras using a smaller carrier board (such as Auvidea J100 or e-conn Elroy carriers)? From everything I read on the TX2 forums, this seems to be a major problem with those boards.
I tried adapting your gst-launch-1.0 command to save video to disk (instead of displaying it). When I run the following, I get a zero-length video file. Any idea what I’m doing wrong?
gst-launch-1.0 -e nvcamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)60/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! multifilesink location=test.mp4
Hi Peter,
Great tutorial! Thanks!
When I capture video from a GStreamer pipeline in OpenCV, the CPU load is 2-3x higher than when I display video with nvgstcapture from the terminal. Could you please tell me if this is normal behavior or not, and how I can fix it?
I cannot for the life of me find where the ./gscam/makefile is. Anyone know?
When you follow these instructions: https://github.com/peter-moran/jetson_csi_cam/blob/master/README.md#2-install-gscam-with-gstreamer-10-support
It clones the gscam repository in the current directory. You can then find it with
ls ./gscam/Makefile
Actually, your mistake might be that you are using ‘makefile’ instead of ‘Makefile’.
Thanks for the great article.
I followed your OpenCV instructions; it took about 4 hours to get to 65% built, so I turned off sleep mode and left the compile running overnight. This morning it had gotten to 96% but hasn’t budged in the last 5 hours. How long did it take you to compile? Do you know if there’s a way to check if the program is stuck?
Looks very long… Be sure to build on a disk that has enough space. You may also add swap. You should close any other resource-consuming applications (web browsers…).
It is a good idea to boost your Jetson beforehand with:
sudo nvpmodel -m0
sudo /home/nvidia/jetson_clocks.sh
You may use make -j4 on the TK1/TX1 or -j6 on the TX2 for a parallel make, but with some OpenCV/L4T versions you may end up with several nvcc instances in parallel and run out of memory. In that case, just retry with a lower value for the -j option.
It just finished compiling! Took 45 hours haha.
Could you please let me know if I can use this camera with the Jetson TX2? https://www.amazon.com/Arducam-LS-40180-Fisheye-M12x0-5-Raspberry/dp/B013JWEGJQ
Hi Peter, this was a great article. Have you ever tried using multiple csi cameras? And do you know if there is any documentation/guides for this?
Thanks! I have not personally tried multiple cameras, but I know others have before. We just merged a feature into the GitHub repo that exposes access to multiple cameras through Nvidia’s tools, though.
Hi Peter,
I’ve studied the e-con Systems test chart and it appears that they are getting roughly what one would expect (save for a couple of outliers) with a USB 3 bus – 200-400 MB/sec transfer rates.
Unlike USB 2.0, with 3.0 you can transfer data directly to memory with no CPU involvement (so I’ve read).
It appears that the limiter with USB 3 isn’t the CPU but the USB 3 bus.
So my question is: if one uses 2 USB 3.0 cameras but with 2 separate USB 3 buses, can the CPU handle the load? It seems to me the limiter is still the USB bus and not the CPU, and maybe the SSD saving the images, depending on the SSD.
I am confused. It looks like you are using a Jetson TX2 developer kit. It already has a CSI camera board on it. Can you just detach it, put it where you want it to be, and link it back to the TX2 with a cable?
Hello, this may be a silly question, but is jetson_csi_cam compatible with ROS Noetic? I have gotten the source code to compile and the node to run; however, the node does not publish /csi_cam/image_raw. I am confident the camera drivers are on and operating correctly.
Hello, a simple question. I noticed you have a tubular wire cable between the camera and the board. I’m having a difficult time finding a vendor who sells those. Would you mind telling me where you got it? Thanks!