How do Convolutional Neural Networks Work?

Many recent breakthroughs in deep learning have come from the development of Convolutional Neural Networks (CNNs or ConvNets). CNNs have been a driving force in the deep neural network field and can even exceed human accuracy on some image recognition tasks.
Published: Oct 06, 2022

What is a Convolutional Neural Network?

A Convolutional Neural Network is a feed-forward neural network whose artificial neurons respond only to units within a limited region of the input (their receptive field), and it performs very well on large-scale image processing. A convolutional neural network consists of one or more convolutional layers and a fully connected layer at the top, together with associated weights and pooling layers. This structure lets convolutional neural networks exploit the two-dimensional structure of the input data. Compared with other deep learning architectures, convolutional neural networks give better results in image and speech recognition. The model can also be trained with the backpropagation algorithm. Compared with other deep feed-forward neural networks, convolutional neural networks have fewer parameters to learn, which makes them an attractive deep learning architecture.

Convolutional Neural Networks are powerful at image recognition, and many image recognition models extend the CNN architecture. It is also worth mentioning that the CNN is a deep learning model inspired by the visual system of the human brain. Learning CNNs is therefore a good foundation for learning other deep learning models.

Feature:

A CNN compares images part by part, and the parts it compares are called features. By matching rough features in roughly the same positions, a CNN distinguishes between images far better than it could by comparing whole images pixel by pixel. Each feature is like a miniature image, that is, a small two-dimensional matrix, and these features capture elements that commonly appear in the images.
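For example, using a common toy convention of +1 for ink and -1 for background (the matrix below is purely illustrative), a feature for one diagonal stroke of an X might be written as:

```python
import numpy as np

# A feature is just a small two-dimensional matrix of pixel values.
# This one describes a 3x3 diagonal stroke.
diagonal_stroke = np.array([[ 1., -1., -1.],
                            [-1.,  1., -1.],
                            [-1., -1.,  1.]])
```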

Convolution:

When a CNN analyzes a new image, it does not know in advance where these features will appear, so it tries matching them at every position in the image. Computing how well each feature matches across the whole image creates a filtering mechanism. The mathematics behind this mechanism is called convolution, which is where CNNs get their name.

The basic principle of convolution is to measure how well a feature conforms to a part of the image: multiply each pixel of the feature by the corresponding pixel of the image patch, sum the products, and divide by the total number of pixels. If every pixel of the patch matches the feature, this calculation gives 1; if every pixel is the exact opposite, it gives -1. Repeating this process at every position, for every feature, completes the convolution. From the value computed at each position we build a new two-dimensional matrix: the original image filtered by that feature, which tells us where the feature is found in the original image. Values close to 1 mark places that closely match the feature, values close to -1 mark places that are its opposite, and values near 0 mark places with almost no resemblance. Applying the same method to the other features produces a set of filtered versions of the original image, one per feature. The entire operation can be thought of as a single processing step; in CNNs, this step is called a convolutional layer, which implies that more layers will follow.
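The matching arithmetic described above can be sketched directly. This is a toy implementation, assuming +1/-1 pixel values and with made-up function names, not a production convolution:

```python
import numpy as np

def feature_match(patch, feature):
    """Score how well a feature fits one image patch: multiply the
    pixels pairwise, sum the products, and divide by the pixel count.
    With +1/-1 pixels a perfect match scores 1, a perfect mismatch -1."""
    return float(np.sum(patch * feature) / feature.size)

def convolve(image, feature):
    """Slide the feature over every position in the image and record
    the match score there, producing one filtered image."""
    fh, fw = feature.shape
    out_h = image.shape[0] - fh + 1
    out_w = image.shape[1] - fw + 1
    out = np.zeros((out_h, out_w))
    for y in range(out_h):
        for x in range(out_w):
            out[y, x] = feature_match(image[y:y + fh, x:x + fw], feature)
    return out
```

Running `convolve` once per feature yields the stack of filtered images the text describes.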

The operation of a CNN is computationally intensive. Although we can explain how a CNN works on a single sheet of paper, the number of additions, multiplications, and divisions grows quickly. With so many factors affecting the amount of computation, the problems CNNs deal with can become complex with little effort, and it is no wonder that some chipmakers design and build specialized chips for the computational demands of CNNs.

Pooling:

Pooling is a method of shrinking an image while retaining its important information, and its working principle requires no more than elementary arithmetic. Pooling slides a window across the image and keeps the maximum value found at each window position. In practice, a square window two or three pixels on a side, moved with a stride of two pixels, is an ideal setting.

After the original image is pooled, the number of pixels it contains drops to a quarter of the original, but because the pooled image keeps the maximum value from each window of the original, it still records how well the feature matched within each region. The pooled information focuses on whether a matching feature exists somewhere in the image rather than exactly where it is, which helps a CNN determine whether an image contains a feature without being distracted by the feature's location.

The function of the pooling layer is to shrink one or more images into smaller ones. The result carries much the same information but with far fewer pixels, which helps with the computational cost mentioned above: shrinking an 8-megapixel image to 2 megapixels in advance makes all the subsequent work easier.
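A minimal sketch of the max pooling just described, assuming a 2x2 window with a stride of two (the function name is illustrative):

```python
import numpy as np

def max_pool(image, size=2, stride=2):
    """Keep only the largest value in each window. With a 2x2 window
    and stride 2, the output has a quarter as many pixels."""
    out_h = (image.shape[0] - size) // stride + 1
    out_w = (image.shape[1] - size) // stride + 1
    out = np.zeros((out_h, out_w))
    for y in range(out_h):
        for x in range(out_w):
            window = image[y * stride:y * stride + size,
                           x * stride:x * stride + size]
            out[y, x] = window.max()
    return out
```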

Rectified linear unit:

An important step in a CNN is the Rectified Linear Unit (ReLU), which simply converts every negative value in the image to 0. This trick keeps the values flowing through the CNN from drifting toward 0 or blowing up toward infinity. The rectified result has the same number of pixels as the input, except that all negative values have been replaced with zeros.
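In code, rectification is a one-liner (a NumPy sketch):

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: replace every negative value with zero,
    leaving non-negative values unchanged."""
    return np.maximum(x, 0)
```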

Deep learning:

After being filtered, rectified, and pooled, the original image becomes a set of smaller images containing feature information. These images can then be filtered and compressed again; with each round of processing the features become more complex and the images become smaller. The early, low-level processing layers capture simple features such as edges or bright spots, while the higher-level layers capture more complex features such as shapes or patterns, and these high-level features are usually easy to recognize.
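The repeated filter-rectify-compress cycle can be sketched end to end. These are toy implementations assuming +1/-1 pixels, and every name here is illustrative:

```python
import numpy as np

def conv(img, f):
    """Average pixel-wise product of the feature at every position."""
    oh = img.shape[0] - f.shape[0] + 1
    ow = img.shape[1] - f.shape[1] + 1
    return np.array([[np.sum(img[y:y + f.shape[0], x:x + f.shape[1]] * f)
                      / f.size for x in range(ow)] for y in range(oh)])

def relu(x):
    """Zero out negative match scores."""
    return np.maximum(x, 0)

def pool(x, s=2):
    """Max pooling with window and stride s."""
    h, w = x.shape[0] // s, x.shape[1] // s
    return np.array([[x[i * s:(i + 1) * s, j * s:(j + 1) * s].max()
                      for j in range(w)] for i in range(h)])

# One convolution layer followed by rectification and pooling;
# a deeper network simply repeats this block on the smaller outputs.
image = np.where(np.random.rand(9, 9) > 0.5, 1.0, -1.0)
feature = np.array([[1., -1., -1.], [-1., 1., -1.], [-1., -1., 1.]])
layer_output = pool(relu(conv(image, feature)))
```

Each pass shrinks the image (here 9x9 in, 3x3 out) while the surviving values describe increasingly abstract features.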

Fully connected layer:

The fully connected layer collects the highly filtered images and converts the feature information into votes. In a traditional neural network architecture, the fully connected layer is the primary building block. When we feed an image into this unit, it treats all the pixel values as a one-dimensional list rather than the earlier two-dimensional matrices. Each value in the list casts votes on whether the symbol in the picture is, say, a circle or a cross. Because some values are better at discriminating crosses and others are better at discriminating circles, some values get a larger say than others. The influence each value has on the different options is expressed as a weight, or connection strength. So every time the CNN judges a new image, the image passes through many lower layers before reaching the fully connected layer, and after the vote, the option with the most votes becomes the category of the image.
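A sketch of the voting step. The weights below are made up for illustration; in a real network they are learned during training:

```python
import numpy as np

def fully_connected(feature_maps, weights):
    """Flatten the pooled feature maps into a one-dimensional list,
    then let every value cast a weighted vote for each category.
    weights has one row per value and one column per category."""
    votes = feature_maps.ravel() @ weights
    return votes, int(np.argmax(votes))

# Four pooled values, two categories: circle (column 0) and cross (column 1).
maps = np.array([[0.9, 0.1],
                 [0.2, 0.8]])
weights = np.array([[1.0, 0.0],   # first value votes strongly for circle
                    [0.0, 1.0],
                    [0.0, 1.0],
                    [1.0, 0.0]])
votes, category = fully_connected(maps, weights)
```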

Like other layers, multiple fully connected layers can be chained together, because their inputs (lists) and outputs (votes) have the same form. In practice, several fully connected layers are often combined, with the intermediate ones voting for virtual, hidden options. Each additional fully connected layer lets the network learn more complex combinations of features and make more accurate judgments.

Backpropagation:

The machine learning technique of backpropagation helps us decide the weights. To use backpropagation, we need a set of images that already have answers (labels), and an untrained CNN in which every pixel of every feature and every weight in the fully connected layers is set randomly. We then train this CNN with the labeled images.

After the CNN processes each image, an election determines its category; the difference between this result and the known correct label is the recognition error. Adjusting the features and weights reduces the error produced by the election. After each adjustment, the features and weights are nudged slightly higher or lower, the error is recomputed, and the adjustments that reduce the error are kept. Adjusting every pixel in the convolutional features and every weight in the fully connected layers in this way yields a set of values that is slightly better at judging the current image. The process then repeats on more labeled images. During training, errors on individual images come and go, but the features and weights shared across many images persist. Given enough labeled images, these values eventually settle into a stable state that is good at recognizing most images. Backpropagation is, however, also a very computationally expensive step.
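The nudge-and-keep loop described above can be sketched directly. Note this is a crude stand-in for true backpropagation, which uses gradients to compute all the adjustments at once; the names and step size are illustrative:

```python
import numpy as np

def tune_once(weights, error_fn, step=0.01):
    """Try nudging each weight slightly up or down; keep a nudge only
    if it reduces the error returned by error_fn."""
    best_error = error_fn(weights)
    for i in range(weights.size):
        for delta in (step, -step):
            trial = weights.copy()
            trial.flat[i] += delta
            trial_error = error_fn(trial)
            if trial_error < best_error:
                weights, best_error = trial, trial_error
                break
    return weights, best_error
```

Calling `tune_once` repeatedly on many labeled examples is the spirit of the training loop; gradients just make the search vastly more efficient.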

Hyperparameters:

  • How many features should be in each convolutional layer? How many pixels should be in each feature?
  • What is the window size in each pooling layer? How large should the stride be?
  • How many hidden neurons (options) should each additional fully connected layer have?

Beyond these questions, we must also consider higher-level structural decisions, such as how many processing layers a CNN should have and in what order. Some deep neural networks may include thousands of processing layers, and there are many design possibilities. With so many permutations, we can only ever test a small subset of possible CNN configurations. CNN designs therefore tend to evolve along with the accumulated knowledge of the machine learning community, with the occasional unexpected jump in performance. Many improvement techniques have also been tested and found effective, such as new kinds of processing layers or more complex ways of connecting layers.

Published: Oct 06, 2022. Source: mcknote
