armnn 20.08-12 source package in Ubuntu

Changelog

armnn (20.08-12) unstable; urgency=medium

  * Stop building libarmnn-cpuacc-backend22 on armhf. The package requires
    Neon support in order to be built, and there is no guarantee that all
    armhf CPUs have it. Importantly, the armhf buildd conova does not support
    Neon.

 -- Emanuele Rocca <email address hidden>  Thu, 02 Feb 2023 16:02:27 +0100

Upload details

Uploaded by: Francis Murtagh
Uploaded to: Sid
Original maintainer: Francis Murtagh
Architectures: amd64 arm64 armhf i386 mipsel mips64el ppc64el
Section: misc
Urgency: Medium

Publishing

Series  Pocket   Published  Component  Section
Lunar   release  -          universe   misc

Downloads

File Size SHA-256 Checksum
armnn_20.08-12.dsc 3.1 KiB c8cdab62cf9eeadff7c8c28abf9c23fbb7e38c28f591032c1fa09fdc7e32c6b3
armnn_20.08.orig.tar.xz 4.3 MiB e834f4ed5ed138ea6c66ea37ec11208af9803271656be16abd426f74287d1189
armnn_20.08-12.debian.tar.xz 19.5 KiB e2396a6ed697e3bd87247dbdf791c83f44cf0b168519bade00a768334eb1b3e5

No changes file available.

Binary packages built by this source

libarmnn-cpuacc-backend22: Arm NN is an inference engine for CPUs, GPUs and NPUs

 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the dynamically loadable Neon backend package.
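 .
 A minimal sketch, using the python3-pyarmnn bindings also built from this
 source, of how the Neon backend provided by this package is typically
 requested at network optimisation time (the model file name is
 hypothetical):
 .
    import pyarmnn as ann

    # Parse a TensorFlow Lite model (hypothetical file name).
    parser = ann.ITfLiteParser()
    network = parser.CreateNetworkFromBinaryFile('model.tflite')

    # Ask the optimizer to target the Neon backend ('CpuAcc'), falling back
    # to the unoptimised reference backend ('CpuRef') for anything the
    # accelerated backend cannot handle.
    runtime = ann.IRuntime(ann.CreationOptions())
    backends = [ann.BackendId('CpuAcc'), ann.BackendId('CpuRef')]
    opt_network, messages = ann.Optimize(network, backends,
                                         runtime.GetDeviceSpec(),
                                         ann.OptimizerOptions())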

libarmnn-cpuacc-backend22-dbgsym: debug symbols for libarmnn-cpuacc-backend22

libarmnn-cpuref-backend22: No summary available for libarmnn-cpuref-backend22 in ubuntu mantic.

No description available for libarmnn-cpuref-backend22 in ubuntu mantic.

libarmnn-cpuref-backend22-dbgsym: debug symbols for libarmnn-cpuref-backend22

libarmnn-dev: Arm NN is an inference engine for CPUs, GPUs and NPUs

 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the development package containing header files.

libarmnn-gpuacc-backend22: Arm NN is an inference engine for CPUs, GPUs and NPUs

 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the dynamically loadable CL backend package.
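 .
 The CL backend is selected the same way, by putting 'GpuAcc' first in the
 backend preference list handed to the optimiser (again a sketch against
 the python3-pyarmnn bindings; a backend that is not available on the
 machine is expected to be skipped in favour of the next one in the list):
 .
    import pyarmnn as ann

    # Prefer the Mali GPU (CL backend), then Neon, then the unoptimised
    # reference implementation.
    preferred = [ann.BackendId('GpuAcc'),
                 ann.BackendId('CpuAcc'),
                 ann.BackendId('CpuRef')]
    # 'preferred' is then passed to ann.Optimize() together with the
    # runtime's device spec, exactly as in the Neon sketch above.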

libarmnn-gpuacc-backend22-dbgsym: debug symbols for libarmnn-gpuacc-backend22

libarmnn22: No summary available for libarmnn22 in ubuntu mantic.

No description available for libarmnn22 in ubuntu mantic.

libarmnn22-dbgsym: debug symbols for libarmnn22

libarmnnaclcommon22: No summary available for libarmnnaclcommon22 in ubuntu mantic.

No description available for libarmnnaclcommon22 in ubuntu mantic.

libarmnnaclcommon22-dbgsym: debug symbols for libarmnnaclcommon22

libarmnntfliteparser-dev: Arm NN TensorFlow Lite parser library - header files

 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the development package containing header files.

libarmnntfliteparser22: Arm NN is an inference engine for CPUs, GPUs and NPUs

 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the shared library package.
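 .
 A minimal sketch of what the parser shared library is used for, via the
 python3-pyarmnn bindings: turning a .tflite file into an Arm NN network
 and looking up its input and output bindings (the file and graph details
 are hypothetical):
 .
    import pyarmnn as ann

    # Load a TensorFlow Lite model into an Arm NN INetwork.
    parser = ann.ITfLiteParser()
    network = parser.CreateNetworkFromBinaryFile('mobilenet_v1.tflite')

    # Query the input and output tensor bindings of the first subgraph.
    graph_id = 0
    input_name = parser.GetSubgraphInputTensorNames(graph_id)[0]
    input_binding = parser.GetNetworkInputBindingInfo(graph_id, input_name)
    output_name = parser.GetSubgraphOutputTensorNames(graph_id)[0]
    output_binding = parser.GetNetworkOutputBindingInfo(graph_id, output_name)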

libarmnntfliteparser22-dbgsym: debug symbols for libarmnntfliteparser22

python3-pyarmnn: PyArmNN is a Python extension for the Arm NN SDK

 PyArmNN provides an interface similar to the Arm NN C++ API.
 .
 PyArmNN is built around public headers from the armnn/include folder
 of Arm NN. PyArmNN does not implement any computation kernels itself;
 all operations are delegated to the Arm NN library.
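 .
 As an illustration, an end-to-end inference pass with PyArmNN might look
 like the following minimal sketch (the model file and input data are
 hypothetical, error handling is omitted, and the input is assumed to be
 float32):
 .
    import numpy as np
    import pyarmnn as ann

    # Parse the model and optimise it for the preferred backends.
    parser = ann.ITfLiteParser()
    network = parser.CreateNetworkFromBinaryFile('model.tflite')
    runtime = ann.IRuntime(ann.CreationOptions())
    backends = [ann.BackendId('CpuAcc'), ann.BackendId('CpuRef')]
    opt_network, _ = ann.Optimize(network, backends,
                                  runtime.GetDeviceSpec(),
                                  ann.OptimizerOptions())
    net_id, _ = runtime.LoadNetwork(opt_network)

    # Look up the model's input and output bindings.
    graph_id = 0
    input_name = parser.GetSubgraphInputTensorNames(graph_id)[0]
    input_binding = parser.GetNetworkInputBindingInfo(graph_id, input_name)
    output_name = parser.GetSubgraphOutputTensorNames(graph_id)[0]
    output_binding = parser.GetNetworkOutputBindingInfo(graph_id, output_name)

    # Hypothetical all-zero input with the expected number of elements.
    input_data = np.zeros(input_binding[1].GetNumElements(), dtype=np.float32)

    # Run the workload and collect the results as numpy arrays.
    input_tensors = ann.make_input_tensors([input_binding], [input_data])
    output_tensors = ann.make_output_tensors([output_binding])
    runtime.EnqueueWorkload(net_id, input_tensors, output_tensors)
    results = ann.workload_tensors_to_ndarray(output_tensors)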

python3-pyarmnn-dbgsym: debug symbols for python3-pyarmnn