Friday, 6 July 2018

Artificial Intelligence (Part-I)- The Driving Force of Modern Digitalization

Image for representative purpose only.

Introduce Yourself to the World of Artificial Intelligence: A Giant Leap Towards Modern Digitalization


Artificial Intelligence (AI) is one of the most popular and sought-after tools for groundbreaking development in almost every field of science and technology. It is continuously paving the way for modern digitalization, improving performance while greatly reducing human effort. In this series of blogs, we will introduce and explore this fascinating field.

Introducing Artificial Intelligence

Artificial intelligence is intelligence exhibited by machines, in contrast to the natural intelligence (NI) displayed by humans and other animals. In computer science, AI research is defined as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals (a minimal sketch of this perceive-and-act loop appears after the quote below). Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". Max Tegmark, President of the Future of Life Institute, once remarked:

"Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before – as long as we manage to keep the technology beneficial".

From the world's best AI-driven applications to self-driving cars, AI is progressing rapidly. Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition, only internet searches, or only driving a car). However, the long-term goal of many researchers is to develop general AI. While narrow AI may outperform humans at whatever its specific task is, such as solving equations, general AI would outperform humans at nearly every cognitive task.

The scope of AI is disputed: as machines become increasingly capable, tasks once considered to require "intelligence" are often removed from the definition, a phenomenon known as the AI effect, leading to the quip, "AI is whatever hasn't been done yet." For instance, optical character recognition is frequently excluded from "artificial intelligence", having become a routine technology. Capabilities generally classified as AI as of 2017 include successfully understanding human speech, competing at the highest level in strategic game systems, autonomously operating cars, intelligent routing in content delivery networks, and military simulations.

The field was founded on the claim that human intelligence "can be so precisely described that a machine can be made to simulate it". This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues that have been explored by myth, fiction and philosophy since antiquity. Some people consider AI a danger to humanity if it progresses unabated, while others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment. In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computing power, large amounts of data, and theoretical understanding, and they have become an essential part of the technology industry, helping to solve many challenging problems in computer science.

A Brief History


Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding. The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis. Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing proposed that if a human could not distinguish between replies from a machine and replies from a human, the machine could be considered "intelligent". The first work that is now generally recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons".

According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence: the number of software projects that use AI within Google increased from "sporadic usage" in 2012 to more than 2,700 projects. Clark also presents data indicating that error rates in image-processing tasks have fallen significantly since 2011. He attributes this to an increase in affordable neural networks, driven by a rise in cloud computing infrastructure and a growth in research tools and datasets. Other cited examples include Microsoft's development of a Skype system that can automatically translate from one language to another and Facebook's system that can describe images to blind people.
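
To give a concrete flavour of the McCulloch–Pitts-style "artificial neuron" mentioned above, here is a minimal Python sketch of a threshold unit. The weights and thresholds are illustrative choices of our own, not notation from the original 1943 paper.

# A McCulloch-Pitts style threshold neuron: it fires (outputs 1) when the
# weighted sum of its binary inputs reaches a threshold, and outputs 0 otherwise.
def mcp_neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds the unit behaves like basic logic gates,
# which is why networks of such neurons can, in principle, compute anything a
# digital logic circuit can.
AND = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=1)

print(AND(1, 1), AND(1, 0))  # prints: 1 0
print(OR(0, 1), OR(0, 0))    # prints: 1 0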


