AllAi

A curated list of ICs and IPs for AI, machine learning, deep learning, and LLMs.


AI

The 2023 MAD (ML/AI/Data) Landscape

Awesome LLM



Shortcut


IC Vendors: Intel, Qualcomm, Nvidia, Samsung, AMD, IBM, Marvell
Tech Giants & HPC Vendors: Google, Amazon_AWS, Microsoft, Apple, Alibaba Group, Tencent Cloud, Baidu, Fujitsu, Nokia, Facebook, Tesla
IP Vendors: ARM, Synopsys, Imagination, CEVA, Cadence, VeriSilicon
Startups: Cerebras, Graphcore, Tenstorrent, Blaize, Koniku, Adapteva, Mythic, BrainChip, LeapMind, Groq, Kneron, Esperanto Technologies, Gyrfalcon Technology, SambaNova Systems, GreenWaves Technologies, Lightelligence, Lightmatter, Hailo, Tachyum, AlphaICs, Syntiant, aiCTX, Flex Logix, Preferred Networks, Cornami, Anaflash, Optalysys, Eta Compute, Achronix, Areanna AI, NeuroBlade, Luminous Computing, Efinix, AIStorm, SiMa.ai, Untether AI, GrAI Matter Labs, Rain Neuromorphics, Applied Brain Research, XMOS, DinoPlusAI, Furiosa AI, Perceive, SimpleMachines, NeuReality, Analog Inference, Quadric, EdgeQ, Innatera Nanosystems, Ceremorphic, Aspinity, TeraMem, d-Matrix

I. IC Vendors


GPU

NVIDIA Teams With Microsoft to Build Massive Cloud AI Computer

Tens of Thousands of NVIDIA GPUs, NVIDIA Quantum-2 InfiniBand and Full Stack of NVIDIA AI Software Coming to Azure; NVIDIA, Microsoft and Global Enterprises to Use Platform for Rapid, Cost-Effective AI Development and Deployment

NVIDIA Hopper Architecture In-Depth

Today during the 2022 NVIDIA GTC Keynote address, NVIDIA CEO Jensen Huang introduced the new NVIDIA H100 Tensor Core GPU based on the new NVIDIA Hopper GPU architecture. This post gives you a look inside the new H100 GPU and describes important new features of NVIDIA Hopper architecture GPUs.

NVIDIA Unveils Grace: A High-Performance Arm Server CPU For Use In Big AI Systems

Kicking off another busy Spring GPU Technology Conference for NVIDIA, this morning the graphics and accelerator designer is announcing that they are going to once again design their own Arm-based CPU/SoC. Dubbed Grace – after Grace Hopper, the computer programming pioneer and US Navy rear admiral – the CPU is NVIDIA’s latest stab at more fully vertically integrating their hardware stack by being able to offer a high-performance CPU alongside their regular GPU wares. According to NVIDIA, the chip is being designed specifically for large-scale neural network workloads, and is expected to become available in NVIDIA products in 2023.

Mobileye EyeQ

Mobileye is currently developing its fifth generation SoC, the EyeQ®5, to act as the vision central computer performing sensor fusion for Fully Autonomous Driving (Level 5) vehicles that will hit the road in 2020. To meet power consumption and performance targets, EyeQ® SoCs are designed in most advanced VLSI process technology nodes – down to 7nm FinFET in the 5th generation.

Loihi

Intel Advances Neuromorphic with Loihi 2, New Lava Software Framework and New Partners

Second-generation research chip uses pre-production Intel 4 process, grows to 1 million neurons. Intel adds open software framework to accelerate developer innovation and path to commercialization.

Habana

Intel’s Habana Labs Launches Second-Generation AI Processors for Training and Inferencing

Today at Intel Vision, Intel announced that Habana Labs, its data center team focused on AI deep learning processor technologies, launched its second-generation deep learning processors for training and inference: Habana® Gaudi®2 and Habana® Greco™. These new processors address an industry gap by providing customers with high-performance, high-efficiency deep learning compute choices for both training workloads and inference deployments in the data center while lowering the AI barrier to entry for companies of all sizes.

Habana Gaudi debuts in the Amazon EC2 cloud

The primary motivation to create this new training instance class was presented by Andy Jassy in the 2020 re:Invent: “To provide our end-customers with up to 40% better price-performance than the current generation of GPU-based instances.”

Qualcomm Ups The Snapdragon AI Game

The leader in premium mobile SoCs has applied AI across the entire platform.

Qualcomm Cloud AI 100

The Qualcomm Cloud AI 100, designed for AI inference acceleration, addresses unique requirements in the cloud, including power efficiency, scale, process node advancements, and signal processing—facilitating the ability of datacenters to run inference on the edge cloud faster and more efficiently. Qualcomm Cloud AI 100 is designed to be a leading solution for datacenters who increasingly rely on infrastructure at the edge-cloud.

Samsung Brings On-device AI Processing for Premium Mobile Devices with Exynos 9 Series 9820 Processor

Fourth-generation custom core and 2.0Gbps LTE Advanced Pro modem enable enriched mobile experiences including AR and VR applications


Samsung recently unveiled “The new Exynos 9810 brings premium features with a 2.9GHz custom CPU, an industry-first 6CA LTE modem and deep learning processing capabilities”.

The soon-to-be-released AMD Instinct™ MI Series Accelerators

AMD Instinct™ accelerators are engineered from the ground up for this new era of data center computing, supercharging HPC and AI workloads to propel new discoveries. The AMD Instinct™ family of accelerators can deliver industry-leading performance for the data center at any scale, from single-server solutions up to the world’s largest supercomputers. With new innovations in AMD CDNA™ 2 architecture, AMD Infinity Fabric™ technology and packaging technology, the latest AMD Instinct™ accelerators are designed to power discoveries at exascale, enabling scientists to tackle our most pressing challenges.

Meet the IBM Artificial Intelligence Unit

It’s our first complete system-on-chip designed to run and train deep learning models faster and more efficiently than a general-purpose CPU.

IBM Telum Processor: the next-gen microprocessor for IBM Z and IBM LinuxONE

The 7 nm microprocessor is engineered to meet the demands our clients face for gaining AI-based insights from their data without compromising response time for high volume transactional workloads.

TrueNorth is IBM's Neuromorphic CMOS ASIC developed in conjunction with the DARPA SyNAPSE program.

It is a manycore processor network on a chip design, with 4096 cores, each one simulating 256 programmable silicon "neurons" for a total of just over a million neurons. In turn, each neuron has 256 programmable "synapses" that convey the signals between them. Hence, the total number of programmable synapses is just over 268 million (2^28). In terms of basic building blocks, its transistor count is 5.4 billion. Since memory, computation, and communication are handled in each of the 4096 neurosynaptic cores, TrueNorth circumvents the von Neumann architecture bottleneck and is very energy-efficient, consuming 70 milliwatts, about 1/10,000th the power density of conventional microprocessors. Wikipedia
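The neuron and synapse counts quoted above follow directly from the core layout. The snippet below is illustrative arithmetic only, not an interface to the chip:

```python
# Sanity-check the TrueNorth figures: 4096 cores, 256 neurons per core,
# 256 synapses per neuron.
cores = 4096
neurons_per_core = 256
synapses_per_neuron = 256

neurons = cores * neurons_per_core          # "just over a million neurons"
synapses = neurons * synapses_per_neuron    # "just over 268 million" synapses

print(neurons)             # 1048576
print(synapses)            # 268435456
print(synapses == 2**28)   # True
```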

AI Hardware Center

The IBM Research AI Hardware Center is a global research hub headquartered in Albany, New York. The center is focused on enabling next-generation chips and systems that support the tremendous processing power and unprecedented speed that AI requires to realize its full potential.

Data Processing Units

Built on seven generations of the industry’s first, most scalable and widely adopted data infrastructure processors, Marvell’s OCTEON™, OCTEON™ Fusion and ARMADA® platforms are optimized for wireless infrastructure, wireline carrier networks, enterprise and cloud data centers.

II. Tech Giants & HPC Vendors


Google Tensor: Everything you need to know about the Pixel 6 chip

Google has taken the wraps off its latest Pixel smartphones and, among the changes, the one with the biggest long-term impact is the switch to in-house silicon for the search giant.

Google Launches TPU v4 AI Chips

Google CEO Sundar Pichai spoke for only one minute and 42 seconds about the company’s latest TPU v4 Tensor Processing Units during his keynote at the Google I/O virtual conference this week, but it may have been the most important and awaited news from the event.

Cloud TPU

Machine learning has produced business and research breakthroughs ranging from network security to medical diagnoses. We built the Tensor Processing Unit (TPU) in order to make it possible for anyone to achieve similar breakthroughs. Cloud TPU is the custom-designed machine learning ASIC that powers Google products like Translate, Photos, Search, Assistant, and Gmail. Here’s how you can put the TPU and machine learning to work accelerating your company’s success, especially at scale.

Edge TPU

AI is pervasive today, from consumer to enterprise applications. With the explosive growth of connected devices, combined with a demand for privacy/confidentiality, low latency, and bandwidth constraints, AI models trained in the cloud increasingly need to be run at the edge. Edge TPU is Google’s purpose-built ASIC designed to run AI at the edge. It delivers high performance in a small physical and power footprint, enabling the deployment of high-accuracy AI at the edge.

Other references are:
Highlights of Google TPU3 (Google TPU3 看点)

Google TPU Revealed (Google TPU 揭密)

Google's Neural Network Processor Patents (Google的神经网络处理器专利)

Systolic Arrays: Reborn Thanks to the Google TPU (脉动阵列 - 因Google TPU获得新生)

Should We All Embrace Systolic Arrays?
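The systolic-array references above all center on one idea behind the TPU's matrix unit: in an output-stationary systolic array, each processing element (PE) holds one element of C = A × B, and skewed operand streams meet at the right cell at the right cycle. The `systolic_matmul` helper below is a hypothetical cycle-level sketch of that dataflow in plain Python, not actual TPU code:

```python
# Minimal simulation of an output-stationary systolic array computing C = A @ B.
# PE (i, j) accumulates C[i][j]; A-values move right and B-values move down,
# skewed so that A[i][s] and B[s][j] meet at PE (i, j) at cycle t = i + j + s.

def systolic_matmul(A, B):
    n, k = len(A), len(A[0])
    m = len(B[0])
    C = [[0] * m for _ in range(n)]
    # The last operand pair reaches PE (n-1, m-1) at cycle (n-1)+(m-1)+(k-1).
    for t in range(n + m + k - 2):
        for i in range(n):
            for j in range(m):
                s = t - i - j          # which operand pair arrives this cycle
                if 0 <= s < k:
                    C[i][j] += A[i][s] * B[s][j]
    return C

print(systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

The wavefront skew is why a systolic array needs no global operand broadcast: every PE only ever talks to its immediate neighbors, which is what makes the design so dense and power-efficient.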

AWS Trainium

AWS Trainium is the second custom machine learning (ML) chip designed by AWS that provides the best price performance for training deep learning models in the cloud. Trainium offers the highest performance with the most teraflops (TFLOPS) of compute power for the fastest ML training in Amazon EC2 and enables a broader set of ML applications. The Trainium chip is specifically optimized for deep learning training workloads for applications including image classification, semantic search, translation, voice recognition, natural language processing and recommendation engines.

AWS Inferentia. High performance machine learning inference chip, custom designed by AWS.

AWS Inferentia provides high throughput, low latency inference performance at an extremely low cost. Each chip provides hundreds of TOPS (tera operations per second) of inference throughput to allow complex models to make fast predictions. For even more performance, multiple AWS Inferentia chips can be used together to drive thousands of TOPS of throughput. AWS Inferentia will be available for use with Amazon SageMaker, Amazon EC2, and Amazon Elastic Inference.

Apple launches MLX machine-learning framework for Apple Silicon

Alibaba’s New AI Chip Can Process Nearly 80K Images Per Second

At the Alibaba Cloud (Aliyun) Apsara Conference 2019, Pingtouge unveiled its first AI dedicated processor for cloud-based large-scale AI inferencing. The Hanguang 800 is the first semiconductor product in Alibaba’s 20-year history.

Tencent reveals three data center chips - for AI, video transcoding, and networking

The company claims that the Zixiao AI chip is twice as good as comparable competing products, video transcoding chip Canghai was 30 percent better, and SmartNIC Xuanling was apparently four times as good. It did not provide external benchmarks or specific product details.


Baidu says 2nd-gen Kunlun AI chips enter mass production

Chinese tech giant Baidu said on Wednesday it had begun mass-producing second-generation Kunlun artificial intelligence (AI) chips, as it races to become a key player in the chip industry which Beijing is trying to strengthen.

The DLU that Fujitsu is creating is designed from scratch: it is not based on either the Sparc or ARM instruction set and, in fact, has its own instruction set and a new data format created specifically for deep learning. Japanese computing giant Fujitsu, which knows a thing or two about making very efficient and highly scalable systems for HPC workloads, as evidenced by the K supercomputer, does not believe that HPC and AI architectures will converge. Rather, the company is banking on these architectures diverging and requiring very specialized functions.

Nokia has developed the ReefShark chipsets for its 5G network solutions. AI is implemented in the ReefShark design for radio and embedded in the baseband to use augmented deep learning to trigger smart, rapid actions by the autonomous, cognitive network, enhancing network optimization and increasing business opportunities.

Facebook developing machine learning chip - The Information

Facebook Inc (FB.O) is developing a machine learning chip to handle tasks such as content recommendation to users, The Information reported on Thursday, citing two people familiar with the project.

Tesla’s Biggest News At AI Day Was The Dojo Supercomputer, Not The Optimus Robot

Elon Musk played AI Day to the crowd with the focus on the Optimus humanoid robot. But while this could have a huge impact on our lives and society if it does enter mass production at the price Musk suggested ($20,000), another part of the presentation will have more immediate effects. That was the status report on the Dojo supercomputer. It could really change the world much more quickly than a bipedal bot.

Tesla Dojo – Unique Packaging and Chip Design Allow An Order Magnitude Advantage Over Competing AI Hardware

Tesla hosted their AI Day and revealed the inner workings of their software and hardware infrastructure. Part of this reveal was the previously teased Dojo AI training chip. Tesla claims their D1 Dojo chip has GPU-level compute and CPU-level flexibility, with networking-switch-class IO.

III. Traditional IP Vendors


NPU ETHOS-N78

Specifically designed for inference at the edge, the ML processor gives an industry-leading performance of 4.6 TOPs, with a stunning efficiency of 3 TOPs/W for mobile devices and smart IP cameras.

ARM Details "Project Trillium" Machine Learning Processor Architecture

Arm’s second-generation, highly scalable and efficient NPU, the Ethos-N78 enables new immersive applications with a 2.5x increase in single-core performance now scalable from 1 to 10 TOP/s and beyond through many-core technologies. It provides flexibility to optimize the ML capability with 90+ configurations.

Synopsys Introduces Industry's Highest Performance Neural Processor IP

New DesignWare ARC NPX6 NPU IP Delivers Up to 3,500 TOPS Performance for Automotive, Consumer and Data Center Chip Designs

AI Processors

Whether you want smartness residing in the palm of your hand, consumer products or industrial robots, or enabled by powerful servers in the cloud, we can help you achieve your vision. We enable the smartness in your products with our PowerVR Neural Network Accelerators (NNA) and GPUs. Our NC-SDK enables seamless deployment of AI acceleration on our hardware IP, either in isolation or combined. Our NNA provides maximum efficiency with a scalable architecture which enables a wide range of smart edge and endpoint devices, from low-performance IoT to high-performance RoboTaxi.

Deep learning for the real-time embedded world

One solution lies in supplying a dedicated low power AI processor for Deep Learning at the edge, combined with a deep neural network (DNN) graph compiler

Tensilica AI Platform

Vivante® NPU IP

VeriSilicon's Neural Network Processor (NPU) IP is a highly scalable, programmable computer vision and artificial intelligence processor that supports AI operations upgrades for endpoints, edge devices, and cloud devices. Designed to meet a variety of chip sizes and power budgets, the Vivante NPU IP is a cost-effective, high-quality neural network acceleration engine solution.

IV. Startups


Cerebras Unveils Andromeda, a 13.5 Million Core AI Supercomputer that Delivers Near-Perfect Linear Scaling for Large Language Models

Delivering more than 1 Exaflop of AI compute and 120 Petaflops of dense compute, Andromeda is one of the largest AI supercomputers ever built, and is dead simple to use

Cerebras Sets Record for Largest AI Models Ever Trained on Single Device

We are announcing the largest models ever trained on a single device. Using the Cerebras Software Platform (CSoft), our customers can easily train state-of-the-art GPT language models (such as GPT-3 and GPT-J) with up to 20 billion parameters on a single CS-2 system. Running on a single CS-2, these models take minutes to set up and users can quickly move between models with just a few keystrokes. With clusters of GPUs, this takes months of engineering work.

Cerebras Completes Series F Funding, Another $250M for $4B Valuation

The new Series F funding round nets the company another $250m in capital, bringing the total raised through venture capital up to $720 million.

Cerebras Unveils Wafer Scale Engine Two (WSE2): 2.6 Trillion Transistors, 100% Yield

Two years ago Cerebras unveiled a revolution in silicon design: a processor as big as your head, using as much area on a 12-inch wafer as a rectangular design would allow, built on 16nm, focused on both AI as well as HPC workloads. Today the company is launching its second generation product, built on TSMC 7nm, with more than double the cores and more than double everything else.

The Cerebras CS-1 computes deep learning AI problems by being bigger, bigger, and bigger than any other chip

Today, the company announced the launch of its end-user compute product, the Cerebras CS-1, and also announced its first customer, Argonne National Laboratory.

Graphcore Supercharges IPU with Wafer-on-Wafer

Graphcore unveiled its third-generation intelligence processing unit (IPU), the first processor to be built using 3D wafer-on-wafer (WoW) technology.

MK2 PERFORMANCE BENCHMARKS

Graphcore, the AI chipmaker, raises another $150M at a $1.95B valuation

Graphcore, the Bristol-based startup that designs processors specifically for artificial intelligence applications, announced it has raised another $150 million in funding for R&D and to continue bringing on new customers. Its valuation is now $1.95 billion.

Demystifying Another xPU: Graphcore's IPU (解密又一个xPU:Graphcore的IPU) gives some analysis of its IPU architecture.

Graphcore AI Chip: More Analysis (Graphcore AI芯片:更多分析).

An In-Depth Look at AI Chip Startup Graphcore's IPU (深度剖析AI芯片初创公司Graphcore的IPU): in-depth analysis after more information was disclosed.

Tenstorrent Raises over $200 million at $1 billion Valuation to Create Programmable, High Performance AI Computers

TORONTO, May 20, 2021 /PRNewswire/ - Tenstorrent, a hardware start-up developing next generation computers, announced today that it has raised over $200 million in a recent funding round that values the company at $1 billion. The round was led by Fidelity Management and Research Company and includes additional investments from Eclipse Ventures, Epic CG and Moore Capital.

An Interview with Tenstorrent: CEO Ljubisa Bajic and CTO Jim Keller

Automotive AI Startup Blaize Closes $71 Million Funding Round

Blaize, formerly ThinCI, has closed a Series D round of funding at $71 million. New investor Franklin Templeton and existing investor Temasek led the round, along with participation from Denso and other new and existing investors. This round brings Blaize’s total funding to around $155 million.

Founded in 2014, Newark, California startup Koniku has taken in $1.65 million in funding so far to become “the world’s first neurocomputation company”. The idea is that since the brain is the most powerful computer ever devised, why not reverse engineer it? Simple, right? Koniku is actually integrating biological neurons onto chips and has made enough progress that they claim to have AstraZeneca as a customer. Boeing has also signed on with a letter of intent to use the technology in chemical-detecting drones.

Adapteva has taken in $5.1 million in funding from investors that include mobile giant Ericsson. The paper "Epiphany-V: A 1024 processor 64-bit RISC System-On-Chip" describes the design of Adapteva's 1024-core processor chip in 16nm FinFet technology.

The Era of Analog Compute has Arrived!

ResNet-50 running in our prototype analog AI processor. Production release will support 900-1000 fps and INT8 accuracy at 3W.

Mythic launches analog AI processor that consumes 10 times less power

Analog AI processor company Mythic launched its M1076 Analog Matrix Processor today to provide low-power AI processing.

BrainChip launches neuromorphic processor for AI at the edge

BrainChip today announced the commercialization of its Akida neural networking processor. Aimed at a variety of edge and internet of things (IoT) applications, BrainChip claims to be the first commercial producer of neuromorphic AI chips, which could deliver benefits in ultra-low power and performance over conventional approaches.

AI Processor Chipmaker Deep Vision Raises $35 Million in Series B Funding

Tiger Global Leads Series B Financing, Enabling Deep Vision to Expand Video Analytics and Natural Language Processing Capabilities in Edge Computing Applications

Groq

Groq Demonstrates Fast LLMs on 4-Year-Old Silicon

MOUNTAIN VIEW, CALIF. — Groq has repositioned its first-generation AI inference chip as a language processing unit (LPU), and demonstrated Meta’s Llama-2 70-billion–parameter large language model (LLM) running inference at 240 tokens per second per user. Groq CEO Jonathan Ross told EE Times that the company had Llama-2 up and running on the company’s 10-rack (64-chip) cloud-based dev system in “a couple of days.” This system is based on the company’s first gen AI silicon, released four years ago.

AI Chip Startup Groq, Founded By Ex-Googlers, Raises $300 Million To Power Autonomous Vehicles And Data Centers

Jonathan Ross left Google to launch next-generation semiconductor startup Groq in 2016. Today, the Mountain View, California-based firm said that it had raised $300 million led by Tiger Global Management and billionaire investor Dan Sundheim’s D1 Capital as it officially launched into public view.

Kneron to Accelerate Edge AI Development with more than 10 Million USD Series A Financing

According to this article, "Gyrfalcon offers Automotive AI Chip Technology"

Gyrfalcon Technology Inc. (GTI), has been promoting matrix-based application specific chips for all forms of AI since offering their production versions of AI accelerator chips in September 2017. Through the licensing of its proprietary technology, the company is confident it can help automakers bring highly competitive AI chips to production for use in vehicles within 18 months, along with significant gains in AI performance, improvements in power dissipation and cost advantages.

SambaNova unveils new AI chip to power full-stack AI platform

Today Palo-Alto-based SambaNova Systems unveiled a new AI chip, the SN40L, which will power its full-stack large language model (LLM) platform, the SambaNova Suite, that helps enterprises go from chip to model — building and deploying customized generative AI models.

SambaNova raises $676M at a $5.1B valuation to double down on cloud-based AI software for enterprises

SambaNova — a startup building AI hardware and integrated systems that run on it that only officially came out of three years in stealth last December — is announcing a huge round of funding today to take its business out into the world. The company has closed on $676 million in financing, a Series D that co-founder and CEO Rodrigo Liang has confirmed values the company at $5.1 billion.

Introducing SambaNova Systems DataScale: A New Era of Computing


A New State of the Art in NLP: Beyond GPUs

SambaNova has been working closely with many organizations the past few months and has established a new state of the art in NLP. This advancement in NLP deep learning is illustrated by a GPU-crushing, world record performance result achieved on SambaNova Systems’ Dataflow-optimized system.

GreenWaves Shows Off Advanced Audio Demos

The Gap9 processor, a successor to Gap8 which targets computer vision in IoT devices, is an ultra-low power neural network processor suitable for battery-powered devices. GreenWaves’ vice president of marketing Martin Croome told EE Times Europe that the company decided to focus Gap9 on the hearables market after receiving traction from this sector for Gap8.

Optical Chip Solves Hardest Math Problems Faster than GPUs

Optical computing startup Lightelligence has demonstrated a silicon photonics accelerator running the Ising problem more than 100 times faster than a typical GPU setup.

Lightmatter Raises More Funding for Photonic AI Chip

Lightmatter, the MIT spinout building AI accelerators with a silicon photonics computing engine, announced a Series B funding round, raising an additional $80 million. The company’s technology is based on proprietary silicon photonics technology which manipulates coherent light inside a chip to perform calculations very quickly while using very little power.

‘Unicorn’ AI Chipmaker Hailo Raises $136 Million

Israeli AI chip startup Hailo has raised $136 million in a Series C funding round, bringing the company’s total to $224 million. The company has also reportedly reached “unicorn” status.

Tachyum Launches Prodigy Universal Processor

May 11, 2021 — Tachyum today launched the world’s first universal processor, Prodigy, which unifies the functionality of a CPU, GPU and TPU in a single processor, creating a homogeneous architecture, while delivering massive performance improvements at a cost many times less than competing products.

AlphaICs Begins Sampling Its Deep Learning Co-Processor

AlphaICs, a startup developing edge AI and learning silicon aimed at smart vision applications, is sampling its deep learning co-processor, Gluon, that also comes with a software development kit.

Syntiant: Analog Deep Learning Chips

Startup Syntiant Corp. is an Irvine, Calif. semiconductor company led by former top Broadcom engineers with experience in both innovative design and in producing chips designed to be produced in the billions, according to company CEO Kurt Busch.

Baidu Backs Neuromorphic IC Developer

MUNICH — Swiss startup aiCTX has closed a $1.5 million pre-A funding round from Baidu Ventures to develop commercial applications for its low-power neuromorphic computing and processor designs and enable what it calls “neuromorphic intelligence.” It is targeting low-power edge-computing embedded sensory processing systems.

Flex Logix has two paths to making a lot of money challenging Nvidia in AI

The programmable chip company scores $55 million in venture backing, bringing its total haul to $82 million

Preferred Networks develops a custom deep learning processor MN-Core for use in MN-3, a new large-scale cluster, in spring 2020

Dec. 12, 2018, Tokyo Japan – Preferred Networks, Inc. (“PFN”, Head Office: Tokyo, President & CEO: Toru Nishikawa) announces that it is developing MN-Core (TM), a processor dedicated to deep learning, and will exhibit this independently developed hardware for deep learning, including the MN-Core chip, board, and server, at SEMICON Japan 2018, held at Tokyo Big Sight.

AI Startup Cornami reveals details of neural net chip

Stealth startup Cornami on Thursday revealed some details of its novel approach to chip design to run neural networks. CTO Paul Masters says the chip will finally realize the best aspects of a technology first seen in the 1970s.

AI chip startup offers new edge computing solution

Anaflash Inc. (San Jose, CA) is a startup company that has developed a test chip to demonstrate analog neurocomputing taking place inside logic-compatible embedded flash memory.

Optalysys launches world’s first commercial optical processing system, the FT:X 2000

Optalysys develops Optical Co-processing technology which enables new levels of processing capability delivered with a vastly reduced energy consumption compared with conventional computers. Its first coprocessor is based on an established diffractive optical approach that uses the photons of low-power laser light instead of conventional electricity and its electrons. This inherently parallel technology is highly scalable and is the new paradigm of computing.

Low-Power AI Startup Eta Compute Delivers First Commercial Chips

The firm pivoted away from riskier spiking neural networks using a new power management scheme

Eta Compute Debuts Spiking Neural Network Chip for Edge AI

Chip can learn on its own and inference at 100-microwatt scale, says company at Arm TechCon.

Achronix Rolls 7-nm FPGAs for AI

Achronix is back in the game of providing full-fledged FPGAs with a new high-end 7-nm family, joining the Gold Rush of silicon to accelerate deep learning. It aims to leverage novel design of its AI block, a new on-chip network, and use of GDDR6 memory to provide similar performance at a lower cost than larger rivals Intel and Xilinx.

Startup Runs AI in Novel SRAM

Areanna is the latest example of an explosion of new architectures spawned by the rise of deep learning. The debut of a whole new approach to computing has fired imaginations of engineers around the industry hoping to be the next Hewlett and Packard.

NeuroBlade Preps Inference Chip

Add NeuroBlade to the dozens of startups working on AI silicon. The Israeli company just closed a $23 million Series A, led by the founder of Check Point Software and with participation from Intel Capital.

Bill Gates just backed a chip startup that uses light to turbocharge AI

Luminous Computing has developed an optical microchip that runs AI models much faster than other semiconductors while using less power.

Chip startup Efinix hopes to bootstrap AI efforts in IoT

Six-year-old startup Efinix has created an intriguing twist on the FPGA technology dominated by Intel and Xilinx; the company hopes its energy-efficient chips will bootstrap the market for embedded AI in the Internet of Things.

AIStorm raises $13.2 million for AI edge computing chips

David Schie, a former senior executive at Maxim, Micrel, and Semtech, thinks both markets are ripe for disruption. He — along with WSI, Toshiba, and Arm veterans Robert Barker, Andreas Sibrai, and Cesar Matias — in 2011 cofounded AIStorm, a San Jose-based artificial intelligence (AI) startup that develops chipsets that can directly process data from wearables, handsets, automotive devices, smart speakers, and other internet of things (IoT) devices.

SiMa.ai Raises $30 Million in Series A Investment Round Led by Dell Technologies Capital

SAN JOSE, Calif.--(BUSINESS WIRE)--SiMa.ai, the company enabling high performance machine learning to go green, today announced its Machine Learning SoC (MLSoC) platform – the industry’s first unified solution to support traditional compute with high performance, lowest power, safe and secure machine learning inference. Delivering the highest frames per second per watt, SiMa.ai’s MLSoC is the first machine learning platform to break the 1000 FPS/W barrier for ResNet-50. In customer engagements, the company has demonstrated 10-30x improvement in FPS/W through its automated software flow across a wide range of embedded edge applications, over today’s competing solutions. The platform will provide machine learning solutions that range from 50 TOPs@5W to 200 TOPs@20W, delivering an industry first of 10 TOPs/W for high performance inference.

SiMa.ai™ Introduces MLSoC™ – First Machine Learning Platform to Break 1000 FPS/W Barrier with 10-30x Improvement over Alternative Solutions


Untether AI nabs $125M for AI acceleration chips

Untether AI, a startup developing custom-built chips for AI inferencing workloads, today announced it has raised $125 million from Tracker Capital Management and Intel Capital. The round, which was oversubscribed and included participation from Canada Pension Plan Investment Board and Radical Ventures, will be used to support customer expansion.

GrAI Matter Labs Reveals NeuronFlow Technology and Announces GrAIFlow SDK

GrAI Matter Labs (aka GML), a neuromorphic computing pioneer today revealed NeuronFlow – a new programmable processor technology – and announced an early access program to its GrAIFlow software development kit.

Rain Neuromorphics on Crunchbase

We build artificial intelligence processors, inspired by the brain. Our mission is to enable brain-scale intelligence.

Applied Brain Research on Crunchbase

ABR makes the world's most advanced neuromorphic compiler, runtime, and libraries for the emerging space of neuromorphic computing.

XMOS adapts Xcore into AIoT ‘crossover processor’

EE Times exclusive! The new chip targets AI-powered voice interfaces in IoT devices — “the most important AI workload at the endpoint.”

XMOS unveils Xcore.ai, a powerful chip designed for AI processing at the edge

The latest xcore.ai is a crossover chip designed to deliver high-performance AI, digital signal processing, control, and input/output in a single device with prices from $1.

We design and produce AI processors and the software to run them in data centers. Our unique approach optimizes for inference with the focus on performance, power efficiency, and ease of use; and at the same time our approach enables cost-effective training.

We build high-performance AI inference coprocessors that can be seamlessly integrated into various computing platforms including data centers, servers, desktops, automobiles and robots.

Corerain provides ultra-high performance AI acceleration chips and the world's first streaming engine-based AI development platform.

Perceive emerges from stealth with Ergo edge AI chip

On-device computing solutions startup Perceive emerged from stealth today with its first product: the Ergo edge processor for AI inference. CEO Steve Teig claims the chip, which is designed for consumer devices like security cameras, connected appliances, and mobile phones, delivers “breakthrough” accuracy and performance in its class.

SimpleMachines, Inc. Debuts First-of-its-Kind High Performance Chip

As traditional chip makers struggle to embrace the challenges presented by the rapidly evolving AI software landscape, a San Jose startup has announced it has working silicon and a whole new future-proof chip paradigm to address these issues. The SimpleMachines, Inc. (SMI) team – which includes leading research scientists and industry heavyweights formerly of Qualcomm, Intel and Sun Microsystems – has created a first-of-its-kind easily programmable, high-performance chip that will accelerate a wide variety of AI and machine-learning applications.

NeuReality lands $35M to bring AI accelerator chips to market

NeuReality, a startup developing AI inferencing accelerator chips, has raised $35 million in new venture capital.

NeuReality unveiled NR1-P, A novel AI-centric inference platform

NeuReality has unveiled NR1-P, a novel AI-centric inference platform, and has already started demonstrating it to customers and partners. The company has redefined today’s outdated AI system architecture by developing an AI-centric inference platform based on a new type of System-on-Chip (SoC).

NeuReality raises $8M for its novel AI inferencing platform

NeuReality, an Israeli AI hardware startup that is working on a novel approach to improving AI inferencing platforms by doing away with the current CPU-centric model, is coming out of stealth today and announcing an $8 million seed round.

Analog inference startup raises $10.6 million

The company is backed by Khosla Ventures and is developing its first generation of products for AI computing at the edge. The company raised $4.5 million shortly after its formation in March 2018, so the latest tranche brings the total raised to date to $15.1 million.

Quadric Announces Unified Silicon and Software Platform Optimized for On-Device AI

BURLINGAME, Calif., June 22, 2021 — Quadric (quadric.io), an innovator in high-performance edge processing, has introduced a unified silicon and software platform that unlocks the power of on-device AI.

EdgeQ reveals more details behind its next-gen 5G/AI chip

5G is the current revolution in wireless technology, and every chip company old and new is trying to burrow their way into this ultra-competitive — but extremely lucrative — market. One of the most interesting new players in the space is EdgeQ, a startup with a strong technical pedigree via Qualcomm that we covered last year after it raised a nearly $40 million Series A.

Innatera Unveils Neuromorphic AI Chip to Accelerate Spiking Networks

Innatera, the Dutch startup making neuromorphic AI accelerators for spiking neural networks, has produced its first chips, gauged their performance, and revealed details of their architecture.

Redpine Founder Launches AI Processor Startup

Ceremorphic, an AI chip startup emerging from stealth mode this week, is readying a heterogeneous AI processor aimed at model training in data centers, automotive, high-performance computing, robotics and other emerging applications.

Aspinity Analog ML Chip Allows Battery-Powered “Always On”

Machine learning (ML) is all about massive amounts of processing, DSP, etc., right? Maybe not, according to the team at Aspinity. The company continues to push ahead on the analog front. The latest member of the company’s analogML family, the AML100, operates completely in the analog domain. As a result, it can reduce always-on system power by 95% (for the record, we had to walk through this a couple of times before we believed them).

TetraMem enjoyed an exciting public debut of our analog in-memory compute technology at the Linley Spring 2022 Processor Conference.

Exclusive: AI chip startup d-Matrix raises $110 million with backing from Microsoft

Sept 6 (Reuters) - Silicon Valley-based artificial intelligence chip startup d-Matrix has raised $110 million from investors that include Microsoft Corp (MSFT.O) at a time when many chip companies are struggling to raise cash.

D-Matrix AI chip promises efficient transformer processing

The startup combines digital in-memory compute and chiplet implementations for data-center-grade inference.

AI Chip Compilers


1. pytorch/glow
2. TVM: End-to-End Deep Learning Compiler Stack
3. Google TensorFlow XLA
4. Nvidia TensorRT
5. PlaidML
6. nGraph
7. MIT Tiramisu compiler
8. ONNC (Open Neural Network Compiler)
9. MLIR: Multi-Level Intermediate Representation
10. The Tensor Algebra Compiler (taco)
11. Tensor Comprehensions
12. PolyMage Labs
13. OctoML
14. Modular AI
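To give a feel for what the compiler stacks above do, here is a toy graph-level "operator fusion" pass in plain Python — the kind of IR rewrite that TVM, Glow, and XLA perform to merge adjacent elementwise operations into a single kernel and avoid intermediate memory traffic. The operator names and `fuse_elementwise` helper are hypothetical, not from any of these projects.

```python
# Illustrative only: fuse runs of adjacent elementwise ops (add, mul, relu)
# into single "fused" kernels, as deep learning compilers do on their IR.
ELEMENTWISE = {"add", "mul", "relu"}

def fuse_elementwise(ops):
    """Merge consecutive elementwise ops in a linear op sequence."""
    fused, run = [], []
    for op in ops:
        if op in ELEMENTWISE:
            run.append(op)          # keep accumulating the fusible run
        else:
            if run:                 # flush the pending run as one kernel
                fused.append("fused(" + "+".join(run) + ")")
                run = []
            fused.append(op)        # non-elementwise ops pass through
    if run:
        fused.append("fused(" + "+".join(run) + ")")
    return fused

print(fuse_elementwise(["conv2d", "add", "relu", "matmul", "mul"]))
# -> ['conv2d', 'fused(add+relu)', 'matmul', 'fused(mul)']
```

Real compilers perform this rewrite on a dataflow graph rather than a linear list, and then generate fused device code for each merged region, but the core idea is the same.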

AI Chip Benchmarks


1. DAWNBench: An End-to-End Deep Learning Benchmark and Competition, Image Classification (ImageNet)
2. Fathom: Reference workloads for modern deep learning methods
3. MLPerf: A broad ML benchmark suite for measuring performance of ML software frameworks, ML hardware accelerators, and ML cloud platforms. You can find the latest MLPerf results (training 2.1, HPC 2.0, inference tiny 1.0) here. You can find MLPerf inference results v2.1 here. You can find MLPerf training results v1.0 here.
4. AI Matrix
5. AI-Benchmark
6. AIIA Benchmark
7. EEMBC MLMark Benchmark
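All of the suites above ultimately reduce to timed runs of a workload under controlled conditions. The sketch below shows a minimal inference-throughput measurement, loosely in the spirit of MLPerf's offline scenario; `measure_throughput` and the toy "model" are hypothetical placeholders, not part of any benchmark's API.

```python
import time

# Illustrative only: measure inference throughput (queries/sec) for a
# callable `model` over a list of input batches, excluding warm-up runs.
def measure_throughput(model, batches, warmup=2):
    for b in batches[:warmup]:           # warm-up runs prime caches/JITs
        model(b)
    start = time.perf_counter()
    for b in batches:
        model(b)                         # timed inference loop
    elapsed = time.perf_counter() - start
    return len(batches) / elapsed        # inferences per second

# Toy "model": sum a list of numbers.
qps = measure_throughput(sum, [[1, 2, 3]] * 100)
print(f"{qps:.0f} inferences/sec")
```

Production benchmarks add much more rigor — accuracy targets, latency percentiles, fixed query-arrival patterns, and power measurement (which is how FPS/W figures like those quoted earlier are obtained) — but the core of each result is a loop like this one.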

Reference


1. FPGAs and AI processors: DNN and CNN for all
2. 12 AI Hardware Startups Building New AI Chips
3. Tutorial on Hardware Architectures for Deep Neural Networks
4. Neural Network Accelerator Comparison
5. "White Paper on AI Chip Technologies 2018". You can download it from here, or Google Drive.
6. "What We Talk About When We Talk About AI Chip". #1, #2, #3, #4
7. AI Chip Paper List
8. TPU vs GPU vs Cerebras vs Graphcore: A Fair Comparison between ML Hardware