The Kubernetes-based, containerized application is now available on the NVIDIA NGC catalog, a GPU-optimized hub for AI and HPC containers.
Training AI models is an extremely time-consuming process. Without a practical way to move model training onto large, distributed clusters, development drags on and training projects stay in flight far longer than they need to. To address these issues, Samsung SDS developed the Brightics AI Accelerator. The Kubernetes-based, containerized application is now available on the NVIDIA NGC catalog, a GPU-optimized hub for AI and HPC containers, pre-trained models, industry SDKs, and Helm charts that helps simplify and accelerate AI development and deployment.
The Samsung SDS Brightics AI Accelerator application automates machine learning, speeds up model training, and improves model accuracy with key features such as automated feature engineering, model selection, and hyper-parameter tuning, without requiring expertise in infrastructure development and deployment. Brightics AI Accelerator can be used in many industries such as healthcare, manufacturing, retail, and automotive, and across use cases spanning computer vision, natural language processing, and more.
Key Features and Benefits:
Is use-case agnostic: it applies AutoML to tabular (CSV), time-series, image, or natural language data to cover analytics; image classification, detection, and segmentation; and NLP use cases.
Offers model portability between cloud and on-prem data centers, and provides a unified interface for orchestrating large, distributed clusters to train deep learning models with the TensorFlow, Keras, and PyTorch frameworks, as well as AutoML with scikit-learn.
AutoML software automates and accelerates model training on tabular data by combining automated model selection from scikit-learn, automated feature synthesis, and hyper-parameter search optimization (a generic scikit-learn illustration of this kind of search follows this list).
Automated Deep Learning (AutoDL) software automates and accelerates deep learning model training using data-parallel, synchronous distributed training (Horovod ring-allreduce) with the Keras, TensorFlow, and PyTorch frameworks and minimal code (a generic Horovod sketch also follows this list). AutoDL exploits up to 512 NVIDIA GPUs per training job to produce a model in 1 hour versus 3 weeks using traditional methods.
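The post doesn't disclose how Brightics AI Accelerator implements its AutoML internally, but the ingredients it names, automated model selection from scikit-learn, feature handling, and hyper-parameter search, can be illustrated with a minimal, generic scikit-learn sketch. The dataset, candidate models, and search spaces below are hypothetical stand-ins, not the product's actual configuration.

```python
# Minimal sketch of automated model selection + hyper-parameter search with
# scikit-learn. Illustrative only: not Brightics AI Accelerator code; the
# dataset, candidate estimators, and search spaces are hypothetical.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Candidate estimators and their hyper-parameter search spaces.
candidates = {
    "logreg": (LogisticRegression(max_iter=1000),
               {"clf__C": [0.01, 0.1, 1.0, 10.0]}),
    "forest": (RandomForestClassifier(),
               {"clf__n_estimators": [100, 300], "clf__max_depth": [None, 8, 16]}),
}

best_score, best_model = -1.0, None
for name, (estimator, space) in candidates.items():
    pipe = Pipeline([("scale", StandardScaler()), ("clf", estimator)])
    search = RandomizedSearchCV(pipe, space, n_iter=4, cv=5, random_state=0)
    search.fit(X_train, y_train)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print("selected model:", best_model)
print("test accuracy:", best_model.score(X_test, y_test))
```

A real AutoML system would also automate feature synthesis and search far larger spaces across many workers; the sketch only shows the model-selection and search loop itself.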
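Likewise, the AutoDL feature builds on Horovod's synchronous ring-allreduce for data-parallel training. The following is a minimal, generic Horovod + Keras sketch of that pattern, not Brightics AI Accelerator code; the toy model, dataset, and learning-rate scaling are placeholders.

```python
# Minimal Horovod + Keras data-parallel training sketch (illustrative only).
# Launch one process per GPU, e.g.: horovodrun -np 4 python train.py
import horovod.tensorflow.keras as hvd
import tensorflow as tf

hvd.init()  # processes form a ring used for gradient allreduce

# Pin each worker process to a single local GPU.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Scale the learning rate by the number of workers and wrap the optimizer
# so gradients are averaged with ring-allreduce on every step.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(1e-3 * hvd.size()))
model.compile(optimizer=opt,
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Broadcast the initial weights from rank 0 so all workers start identically.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]
model.fit(x_train, y_train, batch_size=64, epochs=2,
          callbacks=callbacks, verbose=1 if hvd.rank() == 0 else 0)
```

Scaling the same pattern to hundreds of GPUs is mostly a matter of launching more ranks and sharding the input data per worker.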
Guess what’s back? Back again? GFN Thursday. Tell a friend. Check out this month’s list of all the exciting new titles and classic games coming to GeForce NOW in March. First, let’s get into what’s coming today. Don’t Hesitate It wouldn’t be GFN Thursday if members didn’t have new games to play. Here’s what’s new.
The project uses the NVIDIA Jetson Nano Developer Kit to recognize hand gestures and control a robot dog without a controller.
James Bruton of XRobots was awarded the ‘Jetson Project of the Month’ for OpenDog V2. This project uses the NVIDIA Jetson Nano Developer Kit to recognize hand gestures and control a robot dog without a controller.
James, a robot inventor, thought it would be nice if his OpenDog robot responded to hand gestures. To make this happen, he used transfer learning to retrain an existing SSD-Mobilenet object detection model in PyTorch. During training, he defined five hand gestures that command the robot to move forward, backward, left, or right, and to jump. Using the camera capture tool, he captured these gestures and assigned each to the appropriate class.
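James's actual code is on GitHub (linked below); as a rough illustration of how a retrained SSD-Mobilenet model is typically run for inference with the jetson-inference Python bindings on a Jetson Nano, here is a minimal sketch. The model and label paths, the detection threshold, and the send_gait_command helper are hypothetical placeholders, and the exact constructor arguments can vary between jetson-inference releases.

```python
# Minimal sketch of gesture detection with a retrained SSD-Mobilenet model
# using the jetson-inference Python bindings. Illustrative only: see James's
# GitHub repo for the actual OpenDog V2 code. The ONNX/label paths and
# send_gait_command() are hypothetical placeholders.
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet(argv=[
    "--model=models/gestures/ssd-mobilenet.onnx",   # exported after transfer learning
    "--labels=models/gestures/labels.txt",          # forward, backward, left, right, jump
    "--input-blob=input_0", "--output-cvg=scores", "--output-bbox=boxes",
    "--threshold=0.6",
])

camera = jetson.utils.videoSource("csi://0")  # CSI camera on the Jetson Nano

def send_gait_command(gesture):
    """Hypothetical placeholder: forward the gesture to the robot's gait controller."""
    print("command:", gesture)

while True:
    img = camera.Capture()
    if img is None:           # capture timeout, try again
        continue
    detections = net.Detect(img)
    if detections:
        # Act on the most confident gesture currently in view.
        best = max(detections, key=lambda d: d.Confidence)
        send_gait_command(net.GetClassDesc(best.ClassID))
```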
He ensured that these images were captured at a specific distance from the camera so that OpenDog doesn’t get distracted by hand gestures or similar patterns in the background.
James notes that the project can be improved by adding more training data that includes gestures against different indoor and outdoor backgrounds and from different users. He also plans to convert OpenDog into a ROS robot, similar to his Really Useful AI Robot. He created a series of videos documenting his journey building this project, and the code is available on GitHub.
NVIDIA Clara Parabricks will be available on SHIROKANE, HGC’s supercomputer and Japan’s fastest for life sciences.
The Human Genome Center (HGC) at the University of Tokyo announced a new genomics platform that accelerates genomic analysis by 40X, using NVIDIA Clara Parabricks Pipelines genomics software powered by NVIDIA DGX A100 systems. The platform runs on SHIROKANE, HGC’s supercomputer and Japan’s fastest for life sciences, and will be available to users on April 1, 2021. SHIROKANE helps researchers quickly process massive amounts of genomic data, spanning many nodes with over 400 TFLOPS of compute and more than 12 PB of storage. The ultimate goal of analyzing so much genomic data is to glean insights about germline and somatic variants and move closer to precision medicine.
Today, patients are prescribed medicines that work for the majority of people but are often ineffective because they are not tailored to a specific patient’s genetic profile. Precision medicine aims to provide more specific therapeutics for each patient, using information from whole genome sequencing and other clinical data. As a national strategy, Japan’s Ministry of Health, Labour and Welfare formulated the Execution Plan for Whole Genome Analysis in December 2019, focused on cancer and intractable diseases. The plan, which will take up to three years, aims to sequence 92,000 patients and will ultimately help create a database that research institutions, pharmaceutical companies, and university hospitals can use for drug development and disease prevention.
Whole genome sequencing (WGS) has been widely recognized for its comprehensive analysis and its growing usefulness in areas such as infectious diseases and cancer. WGS examines the complete DNA of an organism, while exome sequencing examines only the protein-coding regions, which make up about 1.5% of the human genome. WGS requires several times the sequencing depth, but it can be done quickly with accelerated genomic analysis such as NVIDIA Clara Parabricks Pipelines.
Professor Seiya Imoto, Director of HGC, said, “The Human Genome Center at the Institute of Medical Science has been working on refining whole-genome data analysis and shortening analysis times in cancer genomic medicine. We evaluated Parabricks for deployment on all of SHIROKANE’s GPU servers; its speed and capabilities are indispensable for the future of large-scale whole-genome analysis. Implemented on GPU servers, its whole-genome data analysis capability is equivalent to hundreds of conventional CPU servers. We will realize a state-of-the-art, high-speed whole-genome data analysis environment that greatly accelerates genome research for SHIROKANE users.”
Clara Parabricks Pipelines accelerates genomic analysis by exploiting the parallel computing performance of GPUs. Many germline and somatic callers have been accelerated in Clara Parabricks Pipelines, including Google’s DeepVariant, which identifies genome variants in sequencing data using convolutional neural networks (CNNs). Whole genome analysis typically takes 20 hours or more per sample in a general CPU environment; on SHIROKANE, powered by NVIDIA DGX A100 systems, the analysis takes less than 30 minutes. HGC put Parabricks Pipelines into production on 16 of the 80 NVIDIA V100 GPUs installed on SHIROKANE in February 2020, and the service is open to users from life science companies.
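HGC’s exact pipeline configuration isn’t described here; as a rough sketch of how a GPU-accelerated germline run with Parabricks Pipelines is commonly driven from the pbrun command line (fq2bam alignment followed by DeepVariant calling), here is a minimal Python wrapper. The reference and FASTQ paths are placeholders, and flags can differ between Parabricks releases, so treat the exact options as assumptions to be checked against the installed version’s pbrun help.

```python
# Minimal sketch of a GPU-accelerated germline workflow with Clara Parabricks
# Pipelines, driven from Python via the pbrun CLI. All paths are hypothetical
# placeholders, and flags may vary between Parabricks releases.
import subprocess

REF = "refs/GRCh38.fa"            # hypothetical reference genome
FASTQ_R1 = "sample_R1.fastq.gz"   # hypothetical paired-end reads
FASTQ_R2 = "sample_R2.fastq.gz"

# 1) GPU-accelerated alignment, sorting, and duplicate marking.
subprocess.run([
    "pbrun", "fq2bam",
    "--ref", REF,
    "--in-fq", FASTQ_R1, FASTQ_R2,
    "--out-bam", "sample.bam",
], check=True)

# 2) GPU-accelerated DeepVariant germline variant calling on the aligned reads.
subprocess.run([
    "pbrun", "deepvariant",
    "--ref", REF,
    "--in-bam", "sample.bam",
    "--out-variants", "sample.vcf",
], check=True)
```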
The genomic analysis proved even faster than expected, and with the growing number of users accessing SHIROKANE, there was a need to further boost its capacity. Eight NVIDIA DGX A100 systems were added to SHIROKANE in 2021, for a total of 88 GPU servers coupled with Parabricks Pipelines to accelerate large-scale genomic workloads. In addition, SHIROKANE provides free access to researchers working on SARS-CoV-2, in an effort to expedite insights about the virus and those infected by it. A joint research group formed at HGC, “The Corona Suppression Task Force,” consists of experts from seven universities and research institutions focused on new coronavirus infections.
NVIDIA DGX A100 is the universal system for all AI workloads, offering unprecedented compute density, performance, and flexibility in the world’s first 5 petaFLOPS AI system. It features NVIDIA A100 Tensor Core GPUs, enabling customers to consolidate training, inference, and analytics into a unified, easy-to-deploy infrastructure.
“NVIDIA has been investing for several years in anticipation of the coming era of large-scale whole-genome analysis,” commented Masataka Osaki, NVIDIA Japan Country Manager and VP of Corporate Sales. “One of the greatest results of that work, Parabricks, together with the latest DGX A100 systems, is now helping Japan’s premier cancer genome research center. NVIDIA’s platform will be the foundation supporting whole-genome research in Japan, and we expect the elucidation of genes associated with cancer and intractable diseases to progress dramatically.”
Seiya Imoto, Director of the Institute of Medical Science at the University of Tokyo, is presenting a talk titled “Realization of Genomic Medicine Based on Whole Genome Information” at the GTC21 conference, April 12-16, which is free this year. Register here.
NVIDIA recently launched the Jetson Nano 2GB Developer Kit Grant Program, which offers limited quantities of Jetson developer kits to professors, educators, and trainers across the globe.
Ideal for hands-on teaching, the Jetson Nano 2GB Developer Kit is the perfect tool for introducing AI and robotics to all kinds of learners, from high school students to post-graduates. We provide all of the resources that educators need to get started, including free tutorials, an active developer community and ready-to-build open-source projects.
New to AI? Teachers possessing a basic familiarity with Python and Linux can get up to speed quickly by taking advantage of our online Jetson AI Courses and Certifications. We’re here to help you get fully prepared to teach AI to your students.
This program is available to educators, including professors, advisors, club organizers, and other relevant faculty members. In order to be considered for the program, applicants must share a detailed proposal including the purpose of their request and the expected impact of their planned project or curriculum.
Jetson Nano 2GB Developer Kit Grant recipients are currently using Jetson to build everything from introductory robotics courses and basic autonomous vehicles to lifeguard drones and applications for monitoring aquatic diseases.
We’re on a mission to bring AI to classrooms everywhere and there’s no better way to start.
Since NVIDIA announced construction of the U.K.’s most powerful AI supercomputer, Cambridge-1, Marc Hamilton, vice president of solutions architecture and engineering, has been (remotely) overseeing its construction across the pond. The system, which will be available to U.K. healthcare researchers working on pressing problems, is being built on NVIDIA DGX SuperPOD architecture.
I need help setting up pycocotools for my training. I have installed it through git, pip, and even conda. I've been stuck on it for the past three days. When I run my main Python file, I keep getting this error:
I am using Windows 10 64-bit, Python 3.7 (Anaconda), TensorFlow 2.4.1, CUDA 11.0.2, and cuDNN 8.0.2.
I've been flirting with the idea of joining the AI developer community for some time now. I'm a .NET developer with 10 years of experience, and the main things I want to use AI for are video detection, tracking, stats, etc.
After some digging, I've found that TensorFlow might be exactly what I'm looking for, but I wanted to ask for advice on which training I should do first.
Python? TensorFlow? Or should I start with some theoretical concepts first?