Categories
Misc

Supercharging AI-Accelerated Cybersecurity Threat Detection

NVIDIA Morpheus, now available for download, enables you to use AI to achieve up to 1000x improved performance.

Cybercrime worldwide is costing as much as the gross domestic product of countries like Mexico or Spain, hitting more than $1 trillion annually. And global trends point to it only getting worse. 

Data centers face staggering increases in users, data, devices, and apps, expanding the threat surface amid ever more sophisticated attack vectors.

Stop emerging threats

NVIDIA Morpheus enables cybersecurity developers and independent software vendors to build high-performance pipelines for security workflows with minimal development effort.

You can easily leverage the benefits of back pressure, reactive programming, and fibers to build cybersecurity solutions. The higher-level API lets you program in a familiar, sequential style while gaining the benefits of accelerated computing, achieving orders-of-magnitude improvements in throughput. These optimizations don’t exist in any other streaming framework. Morpheus now enables building custom pipelines with Python and C++ abstraction layers.

You might typically have had to choose between writing something quickly in Python with minimal lines of code or writing something that doesn’t have the performance ceiling that Python does. With Morpheus, you get both.

You can write orders of magnitude less code and get an unbounded performance ceiling. This enables better results in less time, translating to cost savings and superior outcomes.
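As a rough illustration of how little code a pipeline takes, here is a minimal sketch using the open-source Morpheus Python API. The module paths, stage names, and file names are assumptions based on the public repository and may differ between releases.

    # Minimal sketch of a Morpheus pipeline via the Python API.
    # Module paths, stage names, and arguments are assumptions based on the
    # open-source Morpheus repository and may differ between releases.
    from morpheus.config import Config
    from morpheus.pipeline import LinearPipeline
    from morpheus.stages.input.file_source_stage import FileSourceStage
    from morpheus.stages.preprocess.deserialize_stage import DeserializeStage
    from morpheus.stages.postprocess.serialize_stage import SerializeStage
    from morpheus.stages.output.write_to_file_stage import WriteToFileStage

    config = Config()  # pipeline-wide settings (batch size, thread count, ...)

    pipe = LinearPipeline(config)
    pipe.set_source(FileSourceStage(config, filename="pcap_dump.jsonlines"))
    pipe.add_stage(DeserializeStage(config))  # raw messages -> structured batches
    # ... inference and post-processing stages would be added here ...
    pipe.add_stage(SerializeStage(config))
    pipe.add_stage(WriteToFileStage(config, filename="detections.jsonlines"))

    pipe.run()

Each stage runs asynchronously, so the sequential-looking code above is scheduled and scaled out by the framework rather than executing one step at a time.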

F5 malware detection

NVIDIA partner F5 used a Morpheus-based machine learning model for their malware detection use case. With its highly scalable, customizable, and accelerated data processing, training and inference capabilities, Morpheus enabled a 200x performance improvement to the F5 pipeline over their CPU-based implementation.

The Morpheus pipeline helps you quickly create highly performant code and workflows that can incorporate innovative models, all with minimal development friction. As a result, you extract better performance from GPUs, boosting processing of the logs required to find domain generation algorithms (DGAs).

For F5, this meant going from processing 1,013 DGA logs per second to 20,833 logs per second, all with just 136 lines of code. For more information, see the Detection of DGA-based malicious domain names using real-time ML techniques F5 GTC session.
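The F5 model itself is described in the GTC session. Purely as an illustration of the kind of per-domain feature such a pipeline computes at high rates, here is a hypothetical snippet that scores the character entropy of a domain label, a common DGA signal. This is not F5's model or Morpheus code.

    # Hypothetical illustration only: character entropy is one simple signal often
    # used when hunting DGA-generated domains. Not the F5 model or a Morpheus stage.
    import math
    from collections import Counter

    def char_entropy(domain: str) -> float:
        """Shannon entropy of the characters in a domain's first label."""
        label = domain.split(".")[0].lower()
        counts = Counter(label)
        total = len(label)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    print(char_entropy("google.com"))        # familiar label, relatively low entropy
    print(char_entropy("xj4k29dqpl7m.com"))  # random-looking label, higher entropy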

Scaling the pipeline

Morpheus makes it easy to build and scale cybersecurity applications that harness adaptive pipelines supporting a wider range of model complexity than previously possible. Beyond just hardware acceleration, the programming model plays a critical role in performance. Morpheus uses reactive programming methods, which means that it can adapt and automatically redirect resources on the fly to any portion of the pipeline under pressure.

Figure 1. AI-Based, Real-Time Threat Detection at Scale

If part of the pipeline sees a dramatic increase in data, Morpheus can adapt and create additional paths for the data to continue processing. The depth of these buffers is monitored, and additional segments can be added as necessary. Just as easily, Morpheus removes these when they’re no longer necessary.

Using fibers, Morpheus can take work from other processes, if they’re being underused. You don’t have to spin up anything; just borrow the work available on those underused portions of the pipeline.

All this comes together to enable Morpheus to adapt intelligently to the high variability in cybersecurity data streams. It provides complete visibility into what’s happening on your network in real time and enables you to write sequential code that Morpheus scales out automatically.

With Morpheus, you can analyze up to 100% of your data in real time, for more accurate detection and faster remediation of threats as they occur. Morpheus also uses AI to adjust to threats and compensate on the fly.

Real-time fraud detection at scale

The Morpheus cybersecurity AI framework for developers is a first-of-its-kind offering for creating AI-accelerated, real-time fraud detection at massive scale.

By unleashing streaming graph neural networks (GNNs) for fraud detection, it unlocks capabilities that previously weren’t available to independent software vendors and security developers without large volumes of labeled data.

GNNs achieve next-generation breakthroughs in fraud detection because they are uniquely designed to identify and analyze relationships between seemingly unconnected pieces of data to make predictions, and to do this at massive scale. This is also why GNNs have historically been used for applications such as recommender systems and optimizing delivery routes for drivers.

Morpheus GNNs enable feature engineering for fraud detection with far less training data. With traditional approaches, experts identify the pieces of data that are important, such as geolocation information, and label them with their significance.

Because GNNs require less training data, you reduce the need for human expertise. You also enable the detection of threats that might not be otherwise recognized due to the amount of labeled training data required to train other models. Even with less data, you can improve the accuracy of fraud detection, which could potentially represent hundreds of millions of dollars to an organization.
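To make the idea concrete, here is a hypothetical sketch of the account-merchant graph a fraud GNN would consume. The column names and toy data are assumptions, and Morpheus's actual GNN stages are not shown.

    # Hypothetical sketch: building the account-merchant edge list a fraud GNN
    # would operate on. Column names and data are illustrative assumptions.
    import pandas as pd

    transactions = pd.DataFrame({
        "account_id":  ["a1", "a1", "a2", "a3"],
        "merchant_id": ["m7", "m9", "m7", "m9"],
        "amount":      [12.5, 300.0, 47.0, 310.0],
    })

    # Each row becomes an edge between an account node and a merchant node.
    # Seemingly unrelated accounts (a1 and a3) end up linked through shared
    # merchants, which is exactly the relational structure a GNN exploits.
    edges = list(zip(transactions["account_id"], transactions["merchant_id"]))
    print(edges)  # [('a1', 'm7'), ('a1', 'm9'), ('a2', 'm7'), ('a3', 'm9')]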

Halt ransomware at the point of entry

Brazen global ransomware threats, like the high-profile shutdown of the Colonial Pipeline gas network, were an increased concern in 2021. Organizations are struggling to keep up with the volume and velocity of new threats. The cost of a data breach for an organization can run into the tens of millions of dollars per breach and continues to rise.

The Morpheus AI application framework is built on NVIDIA RAPIDS and NVIDIA AI, together with NVIDIA GPUs. It enables the creation of powerful tools for implementing cybersecurity for this challenging era. When combined with the NVIDIA BlueField DPU accelerators and NVIDIA DOCA telemetry, this ushers in new standards for security development.

Diagram of input from SIEM/SOAR, app logs, cloud logs, BlueField, and converged cards flowing into the Morpheus layer, which sits on top of RAPIDS, cyber log accelerators, Triton Inference Server, and TensorRT.
Figure 2. Morpheus architecture

Use cases for Morpheus include natural language processing (NLP) for phishing detection. Digital fingerprinting is another use case, as it analyzes the behavior of every human and machine across the enterprise to detect anomalies.

Join us at NVIDIA GTC to hear about how NVIDIA partners are integrating NVIDIA-accelerated AI with their cybersecurity solutions. NVIDIA Morpheus is open-source and available in April for download through GitHub and NGC.


Categories
Misc

Model runs fine as a directory, but throws an error when used as an .h5 file: ValueError: All `axis` values to be kept must have known shape. Got axis: (-1,), input shape: [None, None], with unknown axis at index: 1

Interesting issue that I can’t quite wrap my head around.

We have a working Python project using Tensorflow to create and then use a model. This works great when we output the model as a directory, but if we output the model as an .h5 file, we run into the following error whenever we try to use the model:

ValueError: All `axis` values to be kept must have known shape. Got axis: (-1,), input shape: [None, None], with unknown axis at index: 1 

Here is how we were saving the model and how we are currently saving it:

    # this technique works (saves model to a directory)
    tf.keras.models.save_model(
        dnn_model, filepath='./true_overall', overwrite=True,
        include_optimizer=True, save_format=None, signatures=None,
        options=None, save_traces=True
    )

    # this saves the file, but throws an error when the file is used
    tf.keras.models.save_model(
        dnn_model, filepath='./true_overall.h5', overwrite=True,
        include_optimizer=True, save_format=None, signatures=None,
        options=None, save_traces=True
    )

This is how we’re importing the model for use:

    dnn_model = tf.keras.models.load_model('./neural_network/true_overall')     # works
    dnn_model = tf.keras.models.load_model('./neural_network/true_overall.h5')  # doesn't work

What would cause a model to work when saved as a directory but have issues when saved as an h5 file?
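One likely cause (a guess based on the error, not a confirmed fix): the H5 format stores only layer configs and weights and rebuilds the model on load, so a layer that reduces over the last axis cannot be rebuilt when that axis is unknown ([None, None]), whereas the SavedModel directory also stores the traced graph and therefore still loads. Below is a minimal sketch of giving the network a fully known input shape before saving; num_features and the layer sizes are illustrative, not the poster's actual model.

    # Sketch (not the original model): give the network a fully known input shape
    # so the H5 config can rebuild every layer on load.
    import tensorflow as tf

    num_features = 16  # assumed; use the real feature count of your data

    dnn_model = tf.keras.Sequential([
        tf.keras.layers.InputLayer(input_shape=(num_features,)),  # known last axis
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    dnn_model.compile(optimizer="adam", loss="mse")

    tf.keras.models.save_model(dnn_model, "./true_overall.h5")
    reloaded = tf.keras.models.load_model("./true_overall.h5")  # loads without the axis error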

submitted by /u/jengl

Categories
Misc

How to convert keras tensor to numpy array?

I am working on a project in which I am using layer-wise relevance propagation (LRP) to get the relevances of each input. But the output of LRP is a Keras tensor. Is there any way to convert it to a NumPy array?
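A minimal sketch, assuming TensorFlow 2.x eager execution; if the LRP library returns a symbolic Keras tensor, run the computation through a concrete model or function call first so an eager tensor comes back:

    # Sketch: converting a TensorFlow/Keras tensor to a NumPy array in TF 2.x.
    import tensorflow as tf

    t = tf.constant([[0.1, 0.9], [0.4, 0.6]])
    arr = t.numpy()              # works for eager tensors in TF 2.x
    print(type(arr), arr.shape)  # <class 'numpy.ndarray'> (2, 2)

    # Alternative that also works for many tensor-like objects:
    arr2 = tf.keras.backend.get_value(t)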

submitted by /u/jer_in_

Categories
Misc

Added data augmentation through contrast and brightness with poor results, why?


Hi there, I am a couple of weeks in with learning ML, and trying to get a decent image classifier. I have 60 or so labels, and only about 175-300 images each. I found that augmentation via flips and rotations suits the data and has helped bump up the accuracy a bit (maybe 7-10%).

The images have mostly white backgrounds, but some do not (greys, some darker), and this is not evenly distributed. I think it was causing issues when making predictions from test photos: some incorrect labels came up frequently despite little visual similarity. I thought perhaps the background was involved, as the darker backgrounds/shadows matched my photos. I figured adding contrast/brightness variation would nullify this behavior, so I followed this here, which adds a layer that randomizes contrast and brightness for images in the training dataset. Snippet below:

    # defaults: contrast_range=[0.5, 1.5], brightness_delta=[-0.2, 0.2]
    contrast = np.random.uniform(
        self.contrast_range[0], self.contrast_range[1])
    brightness = np.random.uniform(
        self.brightness_delta[0], self.brightness_delta[1])
    images = tf.image.adjust_contrast(images, contrast)
    images = tf.image.adjust_brightness(images, brightness)
    images = tf.clip_by_value(images, 0, 1)
    return images

With slight adjustments to contrast and brightness. I reviewed the output and it looks exactly how I wanted it, and I figured it would at least help, but it appears to cause a trainwreck! Does this make sense? Where should I look to improve on this?
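Two things may be worth checking here. The snippet draws one contrast/brightness value with NumPy per call, so if the layer ever gets traced by tf.function the "random" values can be frozen into constants; and the clip to [0, 1] assumes the images are already scaled to [0, 1] (if they arrive as 0-255, clipping would wash them out and could explain the trainwreck). Below is a sketch that keeps the randomness in TensorFlow's own image ops, with ranges mirroring the defaults above.

    # Sketch of an alternative: per-call randomness stays inside TensorFlow ops,
    # so it is not frozen to one value if the function is traced.
    import tensorflow as tf

    def random_contrast_brightness(images):
        """images: float32 tensor scaled to [0, 1]."""
        images = tf.image.random_contrast(images, lower=0.5, upper=1.5)
        images = tf.image.random_brightness(images, max_delta=0.2)
        return tf.clip_by_value(images, 0.0, 1.0)

    # example usage in a tf.data pipeline:
    # train_ds = train_ds.map(lambda x, y: (random_contrast_brightness(x), y))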

Without contrast/brightness

With contrast/brightness

As well, most tutorials focus on two labels. For 60 or so labels with 200-300 images each, in projects that deal with plants/nature/geology for example, what is typically attainable for accuracy?

submitted by /u/m1g33

Categories
Misc

"kernel driver does not appear to be running on this host"

I looked the problem up but didn't find any solutions, and the only threads I found were from people who wanted to use TensorFlow with a GPU. So here I post:

My situation:

I know the basics of Python, know a little bit about virtual environments, and I'm using the TensorFlow Object Detection API without a GPU on Ubuntu 18.04.

I installed the TensorFlow Object Detection API with this Anaconda guide, “https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/”, though I'm not sure if I activated the tensorflow environment (“conda activate tensorflow”) while doing this. It worked fine, and I wrote various programs with Spyder 5.2.3 using TensorFlow and object detection.

Then I made a terrible rookie mistake and updated Anaconda, and I believe conda too, because I was pretty much mindlessly copying some pip commands, and everything stopped working because of dependency chaos.

I tried to revert the update with conda revisions, but it wasn't working, so I tried deleting Anaconda with

conda install anaconda-clean

anaconda-clean --yes

rm -rf ~/anaconda3

and uninstalling TensorFlow with

pip uninstall tensorflow

and tried reinstalling the whole thing twice, but since then I get the classic error (or hint) about not using a GPU, plus an error message like “kernel driver does not appear to be running on this host” and UNKNOWN ERROR: 303, with some CUDA-related library files reported missing, but I don't use CUDA since I have no GPU.

Does it have something to do with a virtual environment I don't use, or did I not uninstall TensorFlow or Anaconda properly, or is it something else?

I would appreciate some help if possible.
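A quick way to check whether the errors are actually fatal (a sketch; on a machine with no GPU, the driver/CUDA messages printed at import time are usually warnings rather than hard failures):

    # Quick sanity check that TensorFlow imports and runs on CPU.
    import os
    os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"  # hide info/warning spam from the C++ side

    import tensorflow as tf

    print(tf.__version__)
    print(tf.config.list_physical_devices())       # expect only a CPU entry on this machine
    print(tf.reduce_sum(tf.ones((2, 2))).numpy())  # prints 4.0 if the runtime works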

submitted by /u/Mumm13

Categories
Misc

Can someone please tell me how to upload training data? I’m trying to find nums 0-9 on a page.
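If the digit images are organized into one folder per class (0/ through 9/), a labeled training dataset can be built straight from the directory; a minimal sketch, where the "digits/" path and image size are assumptions:

    # Sketch: loading labeled training images from a directory with one subfolder
    # per class (0/, 1/, ..., 9/). The path "digits/" is hypothetical.
    import tensorflow as tf

    train_ds = tf.keras.utils.image_dataset_from_directory(
        "digits/",
        image_size=(28, 28),
        color_mode="grayscale",
        batch_size=32,
    )
    print(train_ds.class_names)  # ['0', '1', ..., '9'] if the folders are named that way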

submitted by /u/Living-Aardvark-952
Categories
Misc

First Wave of Startups Harnesses UK’s Most Powerful Supercomputer to Power Digital Biology Breakthroughs

Four NVIDIA Inception members have been selected as the first cohort of startups to access Cambridge-1, the U.K.’s most powerful supercomputer. The system will help British companies Alchemab Therapeutics, InstaDeep, Peptone and Relation Therapeutics enable breakthroughs in digital biology. Officially launched in July, Cambridge-1 — an NVIDIA DGX SuperPOD cluster powered by NVIDIA DGX A100 …


Categories
Misc

NVIDIA Launches Omniverse for Developers: A Powerful and Collaborative Game Creation Environment

Enriching its game developer ecosystem, NVIDIA today announced the launch of new NVIDIA Omniverse™ features that make it easier for developers to share assets, sort asset libraries, collaborate and deploy AI to animate characters’ facial expressions in a new game development pipeline.

Categories
Misc

At GTC: NVIDIA RTX Professional Laptop GPUs Debut, New NVIDIA Studio Laptops, a Massive Omniverse Upgrade and NVIDIA Canvas Update

Digital artists and creative professionals have plenty to be excited about at NVIDIA GTC. Impressive NVIDIA Studio laptop offerings from ASUS and MSI launch with upgraded RTX GPUs, providing more options for professional content creators to elevate and expand creative possibilities. NVIDIA Omniverse gets a significant upgrade — including updates to the Omniverse Create, Machinima …


Categories
Misc

NVIDIA Omniverse Upgrade Delivers Extraordinary Benefits to 3D Content Creators

At GTC, NVIDIA announced significant updates for millions of creators using the NVIDIA Omniverse real-time 3D design collaboration platform. The announcements kicked off with updates to the Omniverse apps Create, Machinima and Showroom, with an imminent View release. Powered by GeForce RTX and NVIDIA RTX GPUs, they dramatically accelerate 3D creative workflows. New Omniverse Connections …
