README.md
# Model Zoo for Intel® Architecture
This repository contains **links to pre-trained models, sample scripts, best practices, and step-by-step tutorials** for many popular open-source machine learning models optimized by Intel to run on Intel® Xeon® Scalable processors and Intel® Data Center GPUs.
Model packages and containers for running the Model Zoo's workloads can be found at the [Intel® Developer Catalog](https://software.intel.com/containers).
# Model Zoo for Intel® Architecture Workloads Optimized for the Intel® Data Center GPU Flex Series
This document provides links to step-by-step instructions on how to leverage Model Zoo docker containers to run optimized open-source Deep Learning inference workloads using Intel® Extension for PyTorch* and Intel® Extension for TensorFlow* on the [Intel® Data Center GPU Flex Series](https://www.intel.com/content/www/us/en/products/docs/discrete-gpus/data-center-gpu/flex-series/overview.html).
The ResNet50 v1-5 inference container includes the scripts, models, and libraries needed to run int8 inference. To run the `inference_block_format.sh` quickstart script using this container, you'll need to provide a volume mount for the ImageNet dataset and an output directory where log files will be written.
```
export PRECISION=int8
export OUTPUT_DIR=<path to output directory>
export DATASET_DIR=<path to the preprocessed imagenet dataset>
```
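As a sketch, the exports above would plug into a `docker run` invocation roughly like the one below. The image tag and paths are placeholders, not the published container name, and the command is echoed rather than executed so it can be inspected and adjusted first:

```shell
# Sketch only: shows how the env vars and volume mounts feed into docker run.
# IMAGE is a placeholder tag, not the actual published container name.
export PRECISION=int8
export OUTPUT_DIR=/tmp/resnet50-output        # example path
export DATASET_DIR=/tmp/imagenet-tfrecords    # example path
IMAGE=model-zoo:resnet50v1-5-inference        # placeholder

CMD="docker run --rm \
  --env PRECISION=${PRECISION} \
  --env OUTPUT_DIR=${OUTPUT_DIR} \
  --env DATASET_DIR=${DATASET_DIR} \
  --volume ${DATASET_DIR}:${DATASET_DIR} \
  --volume ${OUTPUT_DIR}:${OUTPUT_DIR} \
  ${IMAGE} \
  /bin/bash quickstart/inference_block_format.sh"
echo "${CMD}"
```

The dataset and output directories are mounted at the same path inside the container so the exported variables resolve identically on both sides of the mount.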
Support for Intel® Extension for PyTorch* is found via the [Intel® AI Analytics Toolkit.](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html#gs.qbretz) Additionally, the Intel® Extension for PyTorch* team tracks both bugs and enhancement requests using [GitHub issues](https://github.com/intel/intel-extension-for-pytorch/issues). Before submitting a suggestion or bug report, please search the GitHub issues to see if your issue has already been reported.
## License Agreement
LEGAL NOTICE: By accessing, downloading or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party software included with the Software Package. Please refer to the [license file](https://github.com/IntelAI/models/tree/master/third_party) for additional details.
quickstart/image_recognition/tensorflow/resnet50v1_5/inference/gpu/devcatalog.md
# Running ResNet50 v1.5 Inference with Int8 on Intel® Data Center GPU Flex Series using Intel® Extension for TensorFlow*
## Overview
This document has instructions for running ResNet50 v1.5 inference using Intel(R) Extension for TensorFlow* with Intel(R) Data Center GPU Flex Series.
## Requirements

| Item | Detail |
| ------ | ------- |
| Host machine | Intel® Data Center GPU Flex Series |
| Drivers | GPU-compatible drivers need to be installed: [Download Driver 476.14](https://dgpu-docs.intel.com/releases/stable_476_14_20221021.html) |
| Software | Docker* Installed |

## Get Started

### Download Datasets
Download and preprocess the ImageNet dataset using the [instructions here](https://github.com/IntelAI/models/blob/master/datasets/imagenet/README.md).
After running the conversion script you should have a directory with the ImageNet dataset in the TF records format.
Set the `DATASET_DIR` to point to the TF records directory when running ResNet50 v1.5.
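For illustration, a quick sanity check that the directory looks like a TF records dataset before pointing `DATASET_DIR` at it. The `validation-*` shard naming follows the common ImageNet conversion convention, and the directory and file below are stand-ins created only for the example:

```shell
# Stand-in check: a real DATASET_DIR would contain many train-*/validation-* shards.
DATASET_DIR=$(mktemp -d)
touch "${DATASET_DIR}/validation-00000-of-00128"   # dummy shard for illustration
SHARDS=$(ls "${DATASET_DIR}" | grep -c '^validation-')
echo "validation shards found: ${SHARDS}"
```

If the count is zero, the conversion script has not produced TF records at that path.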
### Quick Start Scripts
| Script name | Description |
|:-------------:|:-------------:|
|`online_inference`| Runs online inference for int8 precision |
|`batch_inference`| Runs batch inference for int8 precision |
|`accuracy`| Measures the model accuracy for int8 precision |
### Run Docker Image
The ResNet50 v1-5 inference container includes the scripts, models, and libraries needed to run int8 inference. To run one of the inference quickstart scripts using this container, you'll need to provide a volume mount for the ImageNet dataset when running the `accuracy.sh` script; `online_inference.sh` and `batch_inference.sh` use a dummy dataset. You will also need to provide an output directory where log files will be written.
```
export PRECISION=int8
export OUTPUT_DIR=<path to output directory>
export DATASET_DIR=<path to the preprocessed imagenet dataset>
```
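Putting it together, a hedged sketch of the `docker run` invocation these exports feed into. The image tag and paths are placeholders rather than the published container name, `SCRIPT` selects one of the three quickstart scripts, and the command is echoed for inspection rather than executed:

```shell
# Sketch only: SCRIPT selects one of the three quickstart scripts.
# IMAGE is a placeholder tag, not the actual published container name.
export PRECISION=int8
export OUTPUT_DIR=/tmp/resnet50-output        # example path
export DATASET_DIR=/tmp/imagenet-tfrecords    # only accuracy.sh needs real data here
IMAGE=model-zoo:tf-flex-resnet50v1-5-inference   # placeholder
SCRIPT=accuracy.sh                               # or online_inference.sh / batch_inference.sh

CMD="docker run --rm \
  --env PRECISION=${PRECISION} \
  --env OUTPUT_DIR=${OUTPUT_DIR} \
  --env DATASET_DIR=${DATASET_DIR} \
  --volume ${DATASET_DIR}:${DATASET_DIR} \
  --volume ${OUTPUT_DIR}:${OUTPUT_DIR} \
  ${IMAGE} \
  /bin/bash quickstart/${SCRIPT}"
echo "${CMD}"
```

For `online_inference.sh` and `batch_inference.sh` the dataset mount can be dropped, since those scripts run against a dummy dataset.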
Support for Intel® Extension for TensorFlow* is found via the [Intel® AI Analytics Toolkit.](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html#gs.qbretz) Additionally, the Intel® Extension for TensorFlow* team tracks both bugs and enhancement requests using [GitHub issues](https://github.com/intel/intel-extension-for-tensorflow/issues). Before submitting a suggestion or bug report, please search the GitHub issues to see if your issue has already been reported.