EnCharge AI Promises Low Power and Accuracy in AI

By Daniel68 · June 2, 2025 · 8 Mins Read

Naveen Verma’s lab at Princeton University is like a museum of all the ways engineers have tried to make AI ultra-efficient by using analog phenomena instead of digital computing. At one bench sits the most energy-efficient magnetic-memory-based neural-network computer ever made. At another you’ll find a resistive-memory-based chip that can compute the largest matrix of numbers of any analog AI system yet.

Neither has a commercial future, according to Verma. Less charitably, this part of his lab is a graveyard.

Analog AI has captured chip architects’ imaginations for years. It combines two key concepts that should make machine learning massively less energy intensive. First, it limits the costly movement of bits between memory chips and processors. Second, instead of the 1s and 0s of logic, it uses the physics of current flow to efficiently perform machine learning’s key calculations.

As attractive as the idea is, the various analog AI schemes haven’t delivered in a way that could really take a bite out of AI’s staggering energy appetite. Verma would know. He’s tried them all.

But when IEEE Spectrum visited a year ago, there was a chip at the back of Verma’s lab that represents some hope for analog AI, and for the energy-efficient computing needed to make AI useful and ubiquitous. Instead of calculating with current, the chip sums up charge. That might seem like an inconsequential difference, but it could be the key to overcoming the noise that hinders every other analog AI scheme.

This week, Verma’s startup EnCharge AI unveiled the first chip based on this new architecture, the EN100. The startup claims the chip handles various AI workloads with performance per watt up to 20 times better than competing chips. It comes as a single-processor card that delivers 200 trillion operations per second at 8.25 watts, designed to preserve battery life in AI-capable laptops. A four-chip card delivering 1,000 trillion operations per second is aimed at AI workstations.

    Current and coincidence

In machine learning, “it turns out, by dumb luck, that the main operation we’re doing is matrix multiplication,” Verma says. That’s basically taking an array of numbers, multiplying it by another array, and adding up the results of all those multiplications. Engineers noticed a coincidence early on: Two fundamental rules of electrical engineering can perform exactly this operation. Ohm’s law says you get current by multiplying voltage and conductance. And Kirchhoff’s current law says that if a bunch of currents enter a point from a bunch of wires, the sum of those currents is what flows out. So, basically, each of a bunch of input voltages pushes current through a resistance (conductance is the inverse of resistance), multiplying the voltage value, and all those currents add up to produce a single value. Math, done.
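
To make the coincidence concrete, here is a minimal numerical sketch (plain NumPy, with made-up voltage and conductance values) showing that Ohm’s law plus Kirchhoff’s current law amounts to a dot product:

```python
import numpy as np

# Hypothetical values, for illustration only.
voltages = np.array([0.3, 0.7, 0.1])         # inputs, in volts
conductances = np.array([2e-6, 5e-6, 1e-6])  # weights, in siemens

# Ohm's law: each input voltage pushes a current through its conductance.
currents = voltages * conductances

# Kirchhoff's current law: currents meeting at one node simply add up.
summed_current = currents.sum()

# That is exactly a dot product, computed "for free" by the physics.
assert np.isclose(summed_current, np.dot(voltages, conductances))
print(f"output current: {summed_current:.3e} A")
```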

Sound good? Well, it gets better. Much of the data that makes up a neural network is the “weights”, the values by which you multiply the inputs. Moving that data from memory into a processor’s logic accounts for a large fraction of the energy a GPU spends. Instead, in most analog AI schemes, the weights are stored in one of several types of nonvolatile memory as conductance values (the resistances above). Because the weight data is already where the computation needs it, it doesn’t have to be moved as much, saving a pile of energy.

That combination of free math and stationary data promises calculations that need just a thousandth of the energy of conventional digital computing. Unfortunately, that’s hardly what analog AI efforts have been delivering.

    The current trouble

The fundamental problem with any kind of analog computing has always been the signal-to-noise ratio, and analog AI has it by the truckload. The signal, in this case the sum of all those multiplications, tends to be overwhelmed by the many possible sources of noise.

“The problem is, semiconductor devices are messy stuff,” Verma says. Say you’ve got an analog neural network where the weights are stored as conductances in individual RRAM cells. Those weight values are stored by setting a relatively high voltage across a cell for a defined period of time. The trouble is, you could set the exact same voltage on two cells for the same amount of time, and the two cells would wind up with slightly different conductance values. Worse still, those conductance values may drift with temperature.

The differences may be small, but recall that the operation adds up many multiplications, so the noise gets magnified. Worse, the resulting current is then turned into a voltage that becomes the input of the next neural-network layer, a step that adds even more noise.
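
A quick behavioral simulation (with invented error magnitudes, not measurements from any real device) shows why per-cell imperfections matter once many multiplications are summed; correlated effects like temperature drift do not average out at all:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096                                 # length of one multiply-accumulate
voltages = rng.uniform(0, 1, n)          # layer inputs
g_ideal = rng.uniform(0, 1e-6, n)        # target conductances (the weights)

# Two hypothetical error sources for RRAM-style cells:
random_spread = rng.normal(0, 0.02, n)   # 2% per-cell programming spread
drift = 0.01                             # 1% correlated temperature drift
g_real = g_ideal * (1 + random_spread + drift)

ideal = np.dot(voltages, g_ideal)
actual = np.dot(voltages, g_real)
print(f"error after summing {n} products: {abs(actual - ideal) / ideal:.2%}")
```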

Researchers have attacked the problem from both the computer-science and the device-physics sides. To compensate for the noise, some researchers have invented ways to bake knowledge of the devices’ physical foibles into their neural network models. Others have focused on making the devices behave as predictably as possible. IBM, which has done extensive research in this area, does both.

Such techniques are competitive, even if not yet commercially successful, in smaller systems aimed at providing low-power machine learning to devices at the edges of IoT networks. Early entrant Mythic AI has produced more than one generation of its analog AI chip, but it is competing in a field where low-power digital chips are succeeding.

[Image: a black circuit board with a large silver chip in the center. The EN100 card for PCs is built on a new analog AI chip architecture. Credit: EnCharge AI]

EnCharge’s solution strips out the noise by measuring the amount of charge, rather than the flow of charge, in the multiply-and-accumulate step. In traditional analog AI, multiplication depends on the relationship among voltage, conductance, and current. In this new scheme, it depends on the relationship among voltage, capacitance, and charge, where, basically, charge equals capacitance times voltage.

Why does that difference matter? It comes down to the component doing the multiplication. Rather than using a finicky, vulnerable device like RRAM, EnCharge uses capacitors.

A capacitor is basically two conductors sandwiching an insulator. A voltage difference between the conductors causes charge to accumulate on one of them. What’s key about them for machine learning purposes is that their value, the capacitance, is determined by their geometry. (More conductor area, or less space between the conductors, means more capacitance.)
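
As a rough illustration of that geometry dependence, here is the textbook parallel-plate formula with invented, interconnect-scale dimensions (not EnCharge’s actual structures), assuming a silicon dioxide insulator:

```python
EPSILON_0 = 8.854e-12  # F/m, permittivity of free space

def capacitance(area_m2: float, gap_m: float, epsilon_r: float = 3.9) -> float:
    """Parallel-plate capacitance: two conductors sandwiching an insulator."""
    return EPSILON_0 * epsilon_r * area_m2 / gap_m

c1 = capacitance(area_m2=1e-12, gap_m=100e-9)  # 1 square micron, 100 nm gap
c2 = capacitance(area_m2=2e-12, gap_m=100e-9)  # double the conductor area...
print(c1, c2, c2 / c1)                         # ...and capacitance doubles
```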

“The only thing they depend on is geometry, basically the space between wires,” Verma says. “And that’s the one thing you can control very, very well in CMOS technology.” EnCharge builds an array of precisely valued capacitors in the layers of copper interconnect above the silicon of its processors.

The data that makes up most of a neural network model, the weights, is stored in digital memory cells, each connected to a capacitor. The data the neural network is analyzing is then multiplied by the weight bits using simple logic built into each cell, and the results are stored as charge on the capacitors. The array then switches into a mode in which all the charges from the multiplication results accumulate, and that sum is digitized.
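
Here is a behavioral sketch of that cell-level flow, simplified to 1-bit weights and activations with identical, hypothetical capacitor values (a real array handles multibit data and analog charge sharing):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
weights = rng.integers(0, 2, n)  # 1-bit weights held in digital memory cells
inputs = rng.integers(0, 2, n)   # 1-bit activations broadcast to the cells
C = 1e-15                        # capacitance per cell, in farads
V = 1.0                          # supply voltage, in volts

# Step 1: in-cell logic multiplies weight by input (for bits, an AND gate),
# and the result lands on each cell's capacitor as charge (Q = C * V).
charge = (weights & inputs) * C * V

# Step 2: the array switches modes, and all the result charges accumulate.
total_charge = charge.sum()

# Step 3: the accumulated charge is digitized; here we just invert Q = C * V.
count = round(total_charge / (C * V))
assert count == int(np.dot(weights, inputs))
print(f"accumulated dot product: {count}")
```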

While the original invention, which dates to 2017, was an important moment for Verma’s lab, he says the underlying concept is quite old. “It’s called switched-capacitor operation; it turns out we’ve been doing it for decades,” he says. It is used, for example, in commercial high-precision analog-to-digital converters. “Our innovation was figuring out how to use it in an architecture that does in-memory computing.”

Competition

Verma’s lab and EnCharge spent years proving that the technology was programmable and scalable, and pairing it with an architecture and software stack suited to AI’s needs, which are vastly different from what they were in 2017. The resulting products are now with early-access developers, and the company, which recently raised $100 million from Samsung Venture, Foxconn, and other investors, is planning another round of early access.

EnCharge, however, is entering a competitive field, and among its competitors is the big kahuna, Nvidia. At its big developer event in March, Nvidia announced plans for a PC product built around its GB10 CPU-GPU combo and for a workstation built around the upcoming GB300.

And there will be plenty of competition in the low-power space, some of it even using forms of compute-in-memory. D-Matrix and Axelera, for example, took up part of analog AI’s promise, embedding memory in the computation, but they do everything digitally. Each has developed custom SRAM memory cells that both store and multiply numbers, and that perform the summation digitally as well. There is at least one more-traditional analog AI startup in the mix, too.

Not surprisingly, Verma is bullish. The new technology “means advanced, secure, and personalized AI can run locally, without relying on cloud infrastructure,” he said in a statement. “We hope this will fundamentally expand what you can do with AI.”
