
Designing AI: The New Frontier of Engineering


Artificial intelligence (AI) is infiltrating almost every industry and discipline. AI is everywhere, from applications like Google Maps, to factory automation, to cars that parallel park themselves.

AI is on track to become the norm for the systems we use and interact with, which means it will also become the norm for system designers who want to create modern, competitive products. However, AI presents design challenges for engineers trained in analog or digital logic design. Fortunately, there are libraries, devices, educational resources, and development kits that can help accelerate the learning curve.

Divergence in Design Techniques and Terminologies

Classically trained designers use a top-down approach when designing systems. AI design, by contrast, can be viewed through the double-diamond method: goals and objectives are set in the first diamond (Discover and Define), while the second diamond (Develop and Deliver) is used to generate ideas and find solutions that work.

Credit: ximnet.medium.com

In the double-diamond approach, the first diamond, “Discover and Define,” identifies the type of AI that will be used and sets the goals and objectives. The second diamond, “Develop and Deliver,” focuses on generating ideas and developing solutions that work. Unlike the traditional top-down approach in engineering design, the double-diamond method emphasizes iterative design and feedback loops when creating effective AI systems.

First, identify the type and purpose of the AI you want to create. Artificial Narrow Intelligence (ANI) learns and executes specific tasks. Artificial General Intelligence (AGI), the anticipated future of AI, refers to machines capable of thinking, reasoning, learning, and acting independently.

Machine learning is a subset of AI that allows machines to learn rather than be explicitly instructed. Google defines machine learning as a subfield of artificial intelligence: a set of techniques and methods that allow computers to learn without following super-specific rules. Digitally implemented, software-based AI can be described as a collection of statistical algorithms that find patterns in large amounts of data and then use those patterns to make predictions. Service providers and advertisers use this technique to suggest things to watch or products you might be interested in purchasing.
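
As a rough illustration of the pattern-matching behind such recommendations, the short Python sketch below compares hypothetical user rating vectors with cosine similarity and suggests an unrated item from the most similar user. The data, user indices, and item numbers are all invented for illustration; real recommendation systems are far more elaborate.

```python
import numpy as np

# Hypothetical user-item rating matrix (rows: users, columns: products).
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
], dtype=float)

def cosine(a, b):
    """Cosine similarity between two rating vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# Find the user most similar to user 0 and suggest an item
# that user 0 has not rated yet but the neighbour rated highly.
target = 0
sims = [cosine(ratings[target], ratings[u]) for u in range(len(ratings))]
sims[target] = -1.0                      # ignore self-similarity
neighbour = int(np.argmax(sims))
unrated = np.where(ratings[target] == 0)[0]
suggestion = unrated[np.argmax(ratings[neighbour][unrated])]
print(f"Suggest item {suggestion} (based on similar user {neighbour})")
```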

Machine learning (ML), a branch of artificial intelligence (AI), uses statistical algorithms and computational models to let computers learn from data without being explicitly programmed. In electronics, machine learning can be applied to a wide variety of tasks, such as predictive maintenance, fault detection, process optimization, and quality control.

Predictive maintenance is one of the most important applications of machine learning in electronics. It uses data analytics to track equipment performance and detect potential problems before they occur. Machine learning algorithms analyze historical data to find patterns and anomalies in equipment behavior that could indicate future failures, allowing technicians to intervene before a breakdown and thereby reduce downtime and costs.
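
As a minimal sketch of how such a detector might be built, the example below trains scikit-learn's IsolationForest on hypothetical "healthy" vibration and temperature readings and then flags new readings that deviate from them. The sensor channels, units, and numbers are invented for illustration, not taken from any real system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical historical readings from healthy equipment:
# columns are vibration [mm/s] and bearing temperature [deg C].
healthy = np.column_stack([
    rng.normal(2.0, 0.3, 500),    # vibration
    rng.normal(60.0, 2.0, 500),   # temperature
])

# Train an anomaly detector on normal operating data only.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(healthy)

# New readings: the last one drifts toward a failure signature.
new_readings = np.array([
    [2.1, 61.0],
    [2.0, 59.5],
    [4.8, 78.0],   # abnormal vibration and temperature
])
flags = model.predict(new_readings)   # +1 = normal, -1 = anomaly
for reading, flag in zip(new_readings, flags):
    status = "OK" if flag == 1 else "inspect before failure"
    print(reading, status)
```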

Fault detection is another important use of machine learning in electronics. It identifies when a system is not working correctly and is commonly used in manufacturing to find defective products. Machine learning algorithms analyze product performance data to spot possible defects, which helps catch problems early in the manufacturing process and reduces the number of defective products that ship.
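
A small sketch of this idea: the example below trains a random-forest classifier on hypothetical end-of-line test measurements labeled good or defective, then uses it to flag a suspicious unit. The features (supply current, output ripple) and all values are illustrative assumptions only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical test data: supply current [mA] and output ripple [mV].
good = np.column_stack([rng.normal(120, 5, 300), rng.normal(10, 2, 300)])
bad  = np.column_stack([rng.normal(150, 15, 60), rng.normal(25, 8, 60)])
X = np.vstack([good, bad])
y = np.array([0] * len(good) + [1] * len(bad))   # 1 = defective

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("Hold-out accuracy:", clf.score(X_test, y_test))

# Flag a suspicious unit coming off the line.
print("Defective?", bool(clf.predict([[155.0, 30.0]])[0]))
```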

Machine learning can also be used for process optimization in electronics manufacturing. Algorithms analyze data from production processes to identify areas that can be improved, such as reducing waste or increasing production efficiency, which improves quality and reduces costs.
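
One simplified way to frame this is as a regression problem over process parameters. The sketch below fits a model to a hypothetical production log relating reflow temperature and conveyor speed to yield, then searches a coarse grid of settings for the highest predicted yield. The parameters, ranges, and yield model are all made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

# Hypothetical production log: reflow peak temperature [deg C] and
# conveyor speed [cm/min] versus measured yield [%].
temp  = rng.uniform(235, 255, 200)
speed = rng.uniform(30, 60, 200)
yield_pct = (98 - 0.02 * (temp - 245) ** 2 - 0.01 * (speed - 45) ** 2
             + rng.normal(0, 0.3, 200))

# Quadratic features let a linear model capture the curved yield surface.
X = np.column_stack([temp, speed, temp ** 2, speed ** 2])
model = LinearRegression().fit(X, yield_pct)

# Search a grid of candidate settings for the highest predicted yield.
grid_t, grid_s = np.meshgrid(np.linspace(235, 255, 41), np.linspace(30, 60, 61))
grid = np.column_stack([grid_t.ravel(), grid_s.ravel(),
                        grid_t.ravel() ** 2, grid_s.ravel() ** 2])
best = np.argmax(model.predict(grid))
print("Suggested settings: %.1f degC, %.1f cm/min" % (grid[best, 0], grid[best, 1]))
```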

Different Types of AI Learning

Machine learning is a way of creating solutions that are accurate and maintain high levels of precision. There are four main methods for learning how to find and apply patterns: supervised, unsupervised, reinforcement, and deep learning.

Supervised learning: The machine is provided with labeled data that tells it what to look for. The model then learns to recognize those patterns in unlabeled or previously unseen data sets. Regression and classification algorithms predict output values from input data features, allowing you to train a model on labeled feature data. These algorithms can run on the standard digital hardware and processors already available to designers.
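
A minimal supervised-learning sketch, assuming a scikit-learn environment: a classifier is trained on a handful of labeled feature vectors and then applied to new, unlabeled measurements. The feature names ("signal amplitude", "noise level") and values are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Labeled training data (hypothetical): each sample is
# [signal amplitude, noise level], and the label says what to look for.
X_train = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.7], [0.1, 0.9],
                    [0.85, 0.15], [0.15, 0.8]])
y_train = np.array([1, 1, 0, 0, 1, 0])   # 1 = "valid signal", 0 = "noise"

# The model learns the mapping from input features to labels...
clf = LogisticRegression().fit(X_train, y_train)

# ...and is then applied to new, unlabeled measurements.
X_new = np.array([[0.75, 0.25], [0.05, 0.95]])
print(clf.predict(X_new))          # expected: [1 0]
print(clf.predict_proba(X_new))    # class probabilities for each sample
```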

Unsupervised learning: The input data is not labeled; the machine simply looks for patterns in the data. This takes less human effort, but it can take longer to get the results you want. It can also uncover new and unexpected relationships. The dimensionality-reduction and clustering algorithms often used in unsupervised learning can be implemented with standard hardware and coding techniques.
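
A brief sketch of those two techniques, using scikit-learn's PCA and k-means on synthetic, unlabeled data that happens to contain three hidden operating modes; the data and dimensions are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# Unlabeled measurements: three hidden operating modes, ten raw features.
modes = [rng.normal(m, 0.5, size=(100, 10)) for m in (0.0, 3.0, 6.0)]
X = np.vstack(modes)

# Dimensionality reduction: compress ten features into two components.
X2 = PCA(n_components=2).fit_transform(X)

# Clustering: let the algorithm discover the groups on its own.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X2)
print("Samples per discovered cluster:", np.bincount(labels))
```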

Reinforcement learning is an intriguing approach to machine learning. No training data is given to seed the learning process; instead, the algorithm learns through trial and error. A system of rewards and penalties guides its development toward the desired behavior, much like psychological behavior-modification techniques.
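
The toy example below, a tabular Q-learning sketch on a made-up five-state corridor, shows that reward-and-penalty loop in miniature. The environment, reward values, and hyperparameters are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Tiny corridor: states 0..4, the goal (reward +1) is at state 4.
# Actions: 0 = step left, 1 = step right. Each step costs -0.01.
N_STATES, GOAL = 5, 4
Q = np.zeros((N_STATES, 2))          # learned action values
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(500):
    s = 0
    while s != GOAL:
        # Trial and error: mostly exploit, sometimes explore.
        a = rng.integers(2) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s_next = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
        r = 1.0 if s_next == GOAL else -0.01          # reward / penalty
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

print("Greedy policy (1 = step right):", np.argmax(Q, axis=1)[:GOAL])
```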

Deep learning: This is currently the most popular and most powerful tool for AI. The method is distinctive because it employs a complex structure of neural networks, which use multiple processing layers to extract progressively higher-level features and trends. Each layer is made up of neurons that determine whether to pass a signal on to the next level or block it.
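
A bare-bones sketch of that layered structure in plain NumPy: an untrained three-layer network whose ReLU neurons either pass a weighted signal forward or block it. In practice the weights would be learned by backpropagation and real networks are far larger; the sizes here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

def relu(x):
    # Each neuron either passes its weighted input forward or blocks it (outputs 0).
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A tiny untrained network: 8 inputs -> 16 -> 8 -> 1 output.
W1, b1 = rng.normal(0, 0.5, (8, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 8)), np.zeros(8)
W3, b3 = rng.normal(0, 0.5, (8, 1)),  np.zeros(1)

def forward(x):
    h1 = relu(x @ W1 + b1)        # first layer: low-level features
    h2 = relu(h1 @ W2 + b2)       # second layer: higher-level features
    return sigmoid(h2 @ W3 + b3)  # output: e.g. probability of a class

x = rng.normal(size=(1, 8))       # one example with 8 input features
print("Network output:", forward(x))
```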

Deep learning mimics how humans acquire certain types of knowledge. It excels at processing very large amounts of data and can tackle the most difficult problems, such as facial recognition, medical diagnosis, and language translation. It is also used in self-driving vehicles for navigating roads. McKinsey Analytics found that deep learning achieved 41 percent, 27 percent, and 25 percent lower error rates for complex tasks such as image classification, facial recognition, and voice recognition, respectively.

Mastering a New Type of Logic

Instead of logic gates that implement AND, OR, and NOT functions, a neural network can be viewed as a logic system built from majority and minority gates. A majority gate produces an active output when most of its inputs are active, while a minority gate produces an active output when only a few of its inputs are active.
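
To make the analogy concrete, the small sketch below implements a majority gate and a minority gate as thresholded sums and shows that the majority gate behaves like a simple neuron with unit weights. This is an illustrative toy, not a description of any particular chip.

```python
import numpy as np

def majority_gate(inputs):
    """Output 1 when more than half of the inputs are active."""
    inputs = np.asarray(inputs)
    return int(inputs.sum() > len(inputs) / 2)

def minority_gate(inputs):
    """Output 1 when fewer than half of the inputs are active."""
    return 1 - majority_gate(inputs)

# A majority gate is a thresholded weighted sum: a perceptron-style neuron
# with unit weights and a threshold of about n/2.
def neuron(inputs, weights, threshold):
    return int(np.dot(inputs, weights) >= threshold)

x = [1, 1, 0, 1, 0]
print(majority_gate(x), minority_gate(x))         # -> 1 0
print(neuron(x, weights=[1] * 5, threshold=3))    # same decision as the majority gate
```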

These logic elements can be combined into neural clusters that filter large data sets and detect patterns. This mirrors a function of the human brain, which takes in vast amounts of sensory information and filters it into a usable form in real time.

Standard digital circuit elements can be used to implement reinforcement, supervised, and unsupervised learning systems, but they demand enormous amounts of memory and storage, as well as high-speed communication in and out. That approach does not scale to deep learning: building exponentially more complex systems at the gate level quickly becomes impractical. Multiple processor cores can emulate a virtual multi-level network of majority and minority gates, but any such simulation will always be slower than a hardware-based implementation. A real deep learning machine will require dense, flexible, and agile neural network systems and chips.

The Forefront

Many groups are working on this problem with different approaches. One example is thin-film arrays of read-only resistive synapses, which allow a pre-programmed network to hold data and knowledge. Another approach uses arrays of programmable synapses and amplifiers that act as electronic neurons.

IC manufacturers such as Intel®, Qualcomm, and IBM offer, or plan to offer, highly developed neural network chips, IP, and development systems.

Both digital multicore and neural-synaptic multilevel architectures are in use. Digital self-learning chips are even available, such as Intel's Loihi 2, a neuromorphic device billed as a first-of-its-kind self-learning chip. Nvidia has developed faster GPU clusters such as the DGX-2, which uses NVSwitch interconnects and is said to deliver two petaflops of performance.

This article cannot cover every AI and deep-learning technology and device, but there are many ways to get started. Mouser distributes single-board computers and modules, supported by AI IP and libraries that incorporate some of these AI and deep-learning technologies. Boards built around the Rockchip RK3399 multicore processor, for example, support standard digital I/O, device interfaces, application demonstrations, and example code.

IBM's TrueNorth neuromorphic ASIC, developed under the DARPA SyNAPSE program, is a more advanced solution. It has 4,096 cores and implements 268 million programmable synapses using 5.4 billion transistors, with an impressively low power consumption of about 70 mW. IBM claims it is orders of magnitude more power-efficient than conventional processors.

Michal Pukala
Electronics and telecommunications engineer with a Master's degree in electro-energetics. Experienced lighting designer. Currently working in the IT industry.
