In 1956 and 1959, the terms “artificial intelligence” and “machine learning” were coined, respectively. Fast forward to 2010, when researchers George Dahl and Abdel-Rahman Mohamed demonstrated that deep learning speech recognition systems could outperform the state-of-the-art industry solutions of the time. Around the same time, Google announced its self-driving car project, which later became Waymo. In September 2010, DeepMind, a pioneer in the fields of AI and deep learning, was founded. We will learn more about these developments later on.

Machine learning is one of the most intriguing research tools to enter the materials science toolkit in recent years. This set of statistical tools has already demonstrated its ability to significantly accelerate both fundamental and applied research. At the moment, a great deal of effort is being devoted to creating and recognizing patterns in solid-state systems. Machine learning has undeniably become one of the most powerful technologies of the past decade. It places a strong emphasis on “learning,” which enables computers to make progressively better decisions based on prior experience.

Significant technological advancements have enabled recent developments in business intelligence, applying capabilities ranging from facial recognition to natural language processing to make business analysis faster and more efficient. True artificial intelligence (a machine or program that thinks and communicates like a human) is still a long way off. Individual machine learning systems, however, have been trained to specialize in specific tasks that are highly valuable.

The availability of massive datasets, combined with improvements in algorithms and exponential increases in computing power, has led to an unprecedented surge of interest in machine learning in recent years. Machine learning methods are now widely used for large-scale classification, regression, clustering, and dimensionality reduction problems involving high-dimensional input data. Indeed, machine learning has demonstrated superhuman performance in a variety of domains (such as playing Go, self-driving cars, and image classification). As a result, machine learning algorithms power many aspects of our daily lives, including image and speech recognition, web search, fraud detection, email/spam filtering, credit scoring, and many others.
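To make one such workflow concrete, here is a minimal sketch of a supervised classification pipeline in Python using scikit-learn. The dataset (scikit-learn’s built-in digits) and the choice of classifier are illustrative assumptions, not something prescribed above.

```python
# Minimal sketch of a supervised classification workflow with scikit-learn.
# Dataset and model choice are illustrative assumptions.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a small benchmark dataset of 8x8 handwritten-digit images.
X, y = load_digits(return_X_y=True)

# Hold out a test split to estimate generalization to unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Fit an off-the-shelf classifier and evaluate it on the held-out split.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"Test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```

The same fit/predict pattern carries over to regression, clustering, and dimensionality reduction by swapping in the corresponding estimator.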

When Google unveiled an artificial intelligence system for lung cancer diagnosis in 2019, it was the medical field’s turn to benefit from rapidly evolving AI technology. Thanks to deep learning and an algorithm that evaluated computed tomography (CT) scans, the machine outperformed human radiologists. Oncologists will benefit greatly from this research, as it provides them with improved tools to diagnose and treat cancer. In 2020, AI and machine learning moved to the forefront of the fight against the COVID-19 pandemic. Researchers are employing AI and machine learning techniques to forecast the virus’s spread, conduct virtual drug screening to identify viable treatments among existing medications, and design prospective vaccines. Even robotics plays a part, as seen in the deployment of social-connectivity robots that helped nursing home residents stay in touch with loved ones during quarantine.

The novel coronavirus disease 2019 (COVID-19) pandemic, caused by SARS-CoV-2, continues to pose a serious and immediate threat to global health. The outbreak began in Hubei province, China, in early December 2019 and has since spread throughout the world. As of May 2022, there were more than 532,000,000 confirmed cases of the disease in more than 180 nations, while the actual number of people infected is likely far higher. COVID-19 has claimed the lives of more than 6 million people.

This pandemic continues to pose numerous challenges to medical systems around the world, including increased demand for hospital beds, critical shortages of medical equipment, and the infection of many healthcare workers. The ability to make rapid clinical decisions and use healthcare resources efficiently is therefore critical. In developing nations, the most widely used COVID-19 diagnostic test, reverse transcription polymerase chain reaction (RT-PCR), has long been in short supply, leading to increased infection rates and the postponement of crucial preventive measures.

AI-based algorithms have demonstrated promising results in image classification, object recognition, and other computer vision (CV) tasks, as well as in speech recognition and machine translation. Deeper network architectures, powerful computation platforms, and access to large-scale benchmark datasets have all contributed to recent advances in AI techniques. Because of their ability to learn and represent features automatically, deep learning (DL) methods have delivered more promising results on many challenging CV tasks than classic machine learning (ML) approaches. This reduces the need to manually construct features based on human expertise, resulting in improved classification and regression accuracy.
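For contrast, the sketch below shows the classic ML approach that DL methods replace: a human-designed feature descriptor (here, histograms of oriented gradients) is computed first, and a conventional classifier is trained on those fixed features. The descriptor and classifier choices are illustrative assumptions.

```python
# Minimal sketch of a classic ML pipeline with hand-crafted features,
# in contrast to DL, which learns features directly from raw pixels.
# The HOG descriptor and linear SVM are illustrative assumptions.
import numpy as np
from skimage.feature import hog
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = load_digits(return_X_y=True)
images = X.reshape(-1, 8, 8)  # restore the 8x8 image shape

# Step 1 (manual feature engineering): encode each image as a
# histogram-of-oriented-gradients descriptor designed by human experts.
features = np.array(
    [hog(img, pixels_per_cell=(4, 4), cells_per_block=(1, 1)) for img in images]
)

# Step 2: train a conventional classifier on the fixed descriptors.
X_tr, X_te, y_tr, y_te = train_test_split(features, y, test_size=0.2, random_state=0)
clf = LinearSVC().fit(X_tr, y_tr)
print(f"HOG + linear SVM accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```

A DL model would fold Step 1 into the network itself, learning the descriptors jointly with the classifier.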

DL-based approaches have been successful in handling a variety of issues, but they have two major drawbacks:
(1) They are exceedingly difficult to train, and
(2) They require a lot of training data.

The first issue is typically addressed by writing highly optimized code and running it on powerful GPU-based machines. The second can be mitigated with data generation techniques such as generative adversarial networks (GANs). The fundamental goal of a GAN is to produce fresh data that is as close to the original training data as possible; the DL networks are then trained on this synthetic data together with the original data.
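The following is a minimal GAN training-loop sketch in PyTorch. The toy two-dimensional “real” data, the tiny fully connected networks, and the hyperparameters are all illustrative assumptions; a GAN for medical images would use convolutional networks and real training data.

```python
# Minimal GAN training-loop sketch in PyTorch (illustrative assumptions:
# toy 2-D data, tiny fully connected networks, untuned hyperparameters).
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

# Generator: maps random noise to synthetic samples.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: outputs a logit scoring how "real" a sample looks.
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0  # stand-in for real data
    fake = G(torch.randn(64, latent_dim))

    # Discriminator update: push real samples toward 1, fakes toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: fool the discriminator into scoring fakes as real.
    g_loss = bce(D(G(torch.randn(64, latent_dim))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# New samples resembling the real data can now be generated and pooled
# with the original data to enlarge the training set.
synthetic = G(torch.randn(100, latent_dim)).detach()
```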

Aside from the well-established procedure for detecting COVID-19-infected persons, there is a pressing need to create auxiliary tools for the detection and monitoring of positive cases. Certain characteristics associated with COVID-19 can be found in CT and chest X-ray (CXR) imaging of the lungs. By passing the input data through a series of convolutional layers, deep learning algorithms such as convolutional neural networks (CNNs) learn the patterns in such images hierarchically. The network’s initial layers capture low-level features like edges, lines, and corners, while the later layers derive highly abstract features that help capture the most salient traits distinguishing COVID-19 from other cases.
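As a concrete illustration, below is a minimal sketch in PyTorch of such a CNN classifier for lung images; the layer widths, input resolution, and two-class setup are illustrative assumptions rather than a published architecture.

```python
# Minimal sketch of a CNN for binary COVID-19 vs. non-COVID classification
# of CT/CXR images; sizes and depths are illustrative assumptions.
import torch
import torch.nn as nn

class CovidCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # Early layers capture low-level features (edges, lines, corners).
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            # Deeper layers derive increasingly abstract, class-specific patterns.
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 28 * 28, 2),  # two classes: COVID-19 vs. other
        )

    def forward(self, x):  # x: (batch, 1, 224, 224) grayscale images
        return self.classifier(self.features(x))

model = CovidCNN()
logits = model(torch.randn(4, 1, 224, 224))  # dummy batch of 4 images
print(logits.shape)  # torch.Size([4, 2])
```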

AI systems have long been used to investigate, predict, and analyze diseases. MYCIN, the first program of its kind, was created in the 1970s and recommended antibiotics for bacterial infections. Such methods have been used by many healthcare experts not only to identify ailments, but also to formulate pharmaceuticals, analyze medical images collected for clinical trials, and anticipate pandemics. Many machine learning and AI medical technologies have been produced for non-infectious (diabetes, cancer, Parkinson’s, heart disease, etc.) and infectious (HIV, Ebola, SARS, and COVID-19) diseases.

This is unlikely to be the final pandemic, as the world has seen numerous similar crises in the past. A critical step today is to learn from existing pandemics and prepare for future difficulties; models and preparations should be made with the future in mind. The spread of such diseases can be greatly reduced by raising public awareness and ensuring a balanced distribution of resources. Just as the industrial revolution produced machines that could perform mechanical jobs more efficiently than humans, machine learning systems are gradually being trained to discover patterns and uncover relationships between qualities and features more efficiently than people can. When it comes to the future of research, there will be a clear distinction between approaches based on data availability.

About the Writer:

Rubel Sheikh has been a lecturer in the Department of Computer Science and Engineering at Daffodil International University since January 2021. He holds M.S. and B.Sc. degrees in Computer Science and Engineering from Jahangirnagar University, with good academic achievements. He conducts research in several fields of Machine Learning.