Common Sense Commentary: Technological developments and recent signs of the latter days are somewhat like a heavily loaded freight train going downhill with brakes burned out. Its speed, regardless of braking, continues to increase as it descends, faster and faster to the bottom ... where the track ends. Enjoy the trip ... Jesus will return before this trip terminates at the bottom of time.
The Image of the Beast
Revelation 13:13-15
"And he doeth great wonders, so that he maketh fire come down from heaven on the earth in the sight of men, And deceiveth them that dwell on the earth by the means of those miracles which he had power to do in the sight of the beast; saying to them that dwell on the earth, that they should make an image to the beast, which had the wound by a sword, and did live. And he had power to give life unto the image of the beast, that the image of the beast should both speak, and cause that as many as would not worship the image of the beast should be killed."
The "Image of the Beast" is probably a technological means of replicating the mind and global agenda of the Beast in a virtual, cyber, robotic form. RB
Another Brain/Machine combination development article I found in the British Daily Mail under Science & Tech is this: RB
The real-life Matrix: MIT researchers reveal interface that can allow a computer to plug into the brain
It has been the holy grail of science fiction - an interface that allows us to plug our brain into a computer. Now, researchers at MIT have revealed new fibers less than the width of a hair that could make it a reality. They say their system could deliver optical signals and drugs directly into the brain, along with electrical readouts to continuously monitor the effects of the various inputs.
HOW IT WORKS
The new fibers are made of polymers that closely resemble the characteristics of neural tissues. Combining the different channels could enable precision mapping of neural activity and, ultimately, treatment of neurological disorders that would not be possible with single-function neural probes. 'We're building neural interfaces that will interact with tissues in a more organic way than devices that have been used previously,' said MIT's Polina Anikeeva, an assistant professor of materials science and engineering. The human brain's complexity makes it extremely challenging to study, not only because of its sheer size, but also because of the variety of signaling methods it uses simultaneously.
____________________________________
The absolute global control the Antichrist will have over the entire world's people would seem to indicate his use of ultra-high-tech computer development, which enables him to maintain a detailed record of every human being on Earth and all human necessities and services. This global ID system will obviously give the beast, or Antichrist, not only the ability to "equalize" goods and services globally, but also the means of cutting off those things from people who are unmarked, unidentifiable, uncooperative, or undesirable.
Even now our government, and those of other countries, are developing warehouses of personal records on human beings. They are letting out contracts to high-tech companies to develop and run these so-called "Information Warehouses". RB
This From The Weekly Standard
National and Global Warehouses of Personal Information being developed on every human being on Earth.
Feds Looking for Company to Run 'National Data Warehouse' for Obamacare, Medicare
The Department of Health and Human Services (HHS) is looking for vendors to run its "National Data Warehouse," a database for "capturing, aggregating, and analyzing information" related to beneficiary and customer experiences with Medicare and the federal Obamacare marketplaces. Although the database primarily consists of quality control metrics related to individuals' interactions with customer service, potential contractors are to "demonstrate ... experience with scalability and security in protecting data and information with customer, person-sensitive information including Personal Health Information and Personally Identifiable information (personal health records, etc.)." Vendors are also instructed that one of the requirements of a possible future contract would be "ensuring that all products developed and delivered adhere to Health Insurance Portability and Accountability Act (HIPAA) compliance standards."
For a number of years, the Centers for Medicare and Medicaid Services (CMS), the division of HHS responsible for Medicare and now Obamacare also, has maintained a "national data warehouse" (NDW) related to the 1-800-MEDICARE helpline. The passage of the Affordable Care Act and subsequent establishment of the Marketplaces has expanded the scope of the NDW. The CMS notice explains the NDW as follows:
The NDW acts as the central repository for capturing, aggregating, and analyzing information related to the beneficiary experience with Medicare and the consumer experience with Marketplaces. The NDW also serves as a foundation for operational and management reporting to support improved decision-making, business practices, and services to callers.
The type of data included in the NDW "includes information for CMS’ Virtual Contact Center operations including, but not necessarily limited to" items such as "Workforce management data," "Quality monitoring," "Medicare disenrollments," "Beneficiary satisfaction surveys," and "Web Chat metrics." The NDW is part of CMS's larger $15 billion "Virtual Data Center" program awarded to multiple vendors in 2012. The eventual vendor for the NDW must be able to integrate and share data with the other Virtual Data Center vendors.
The description for the "NDW Functional Requirements" included thirty-six items, several with multiple subpoints, and even this list is not meant to be "all inclusive" according to CMS. In addition to these functions, the "contractor shall implement a security program that adheres to CMS security standards." Interested vendors have until January 19, 2015, to respond.
_______________________
And this from DiscoverMagazine.com:
'Avengers: Age of Ultron' and the Risks of Artificial Intelligence
Technology enhanced with artificial intelligence is all around us. You might have a robot vacuum cleaner ready to leap into action to clean up your kitchen floor. Maybe you asked Siri or Google—two apps using decent examples of artificial intelligence technology—for some help already today. The continual enhancement of AI and its increased presence in our world speak to achievements in science and engineering that have tremendous potential to improve our lives.
Or destroy us.
At least, that’s the central theme in the new Avengers: Age of Ultron movie with headliner Ultron serving as exemplar for AI gone bad. It’s a timely theme, given some high-profile AI concerns lately. But is it something we should be worried about?
Artificial Intelligence Gone Rogue
How bad is Ultron? The Official Handbook of the Marvel Universe lists his occupation as “would-be conqueror, enslaver of men” with genius intelligence, superhuman speed, stamina, reflexes, and strength, subsonic flight speed, and demi-godlike durability. The good news is that Ultron has “normal” agility and “average hand to hand skills.” Meaning if you can get in close to an autonomous robot with superhuman speed, you should be good to go. At least briefly.
But perhaps most importantly, Ultron represents the ultimate example of artificial intelligence applications gone wrong: intelligence that seeks to overthrow the humans who created it.
Subsequent iterations of Ultron were self-created, each one getting stronger, smarter, and more bent on fulfilling two main desires: survival and bringing peace and order to the universe. The unfortunate part for us humans is that Ultron would like to bring peace and order by eliminating all other intelligent life in the universe. The main theme in Age of Ultron is this fictional conflict between biological beings and artificial intelligence (with a mean streak). But how fictional is it?
Thinking Machines
The answers are found in scientific research related to the fields of machine learning, artificial intelligence, and artificial life. These are fields that continue to expand at a ridiculous, if not superhuman, pace.
One of the most recent breakthroughs was a study in which Volodymyr Mnih and colleagues at Google DeepMind challenged a neural network to learn how to play video games.
The point was to see if the software (rather ominously called a “deep Q-network agent”) could apply lessons learned in one game to master another game. For more than half of the games examined, the deep Q-network agent was better than human level. This list includes Boxing, Video Pinball, Robotank (a favorite of mine), and Tutankham.
And though arcade games may seem trivial, the takeaway here really had nothing to do with games per se. The relevance is that an AI system could adapt its skills to situations for which its programmer had never prepared it. The AI was effectively learning how to apply skills in a new way, basically thinking on its own. Which is relevant in considering the possibility of an AI going rogue.
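The "deep Q-network" in the study above is built on the Q-learning algorithm: the agent repeatedly nudges its estimate of each action's value toward the reward it received plus the discounted value of the best next action. Here is a minimal, tabular sketch of that idea; note this is an illustration only, and the toy corridor environment and all parameter values are my own assumptions, not details from the DeepMind study (which replaces the table below with a neural network trained on raw game pixels).

```python
import random

# Toy environment: a corridor of 5 cells. The agent starts at cell 0
# and earns a reward of 1.0 for reaching cell 4 (the goal).
N_STATES = 5
ACTIONS = [-1, +1]                 # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: estimated value of taking each action in each state.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action,
        # occasionally explore a random one.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)   # clip to the corridor
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# Greedy policy after training; +1 means "step right," which is
# optimal in every cell of this corridor.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)])
          for s in range(N_STATES - 1)}
print(policy)
```

The point the article makes is that nothing in this update rule mentions any particular game: the same learning loop, fed different screens and scores, taught itself dozens of different games.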
IBM's Watson computer is a well-known instance of AI. Credit: Clockready
Sounding an Alarm
So, is this a problem? Coverage in popular media often seems to give the spin that machine learning and artificial intelligence are things to fear. There is a boundary that separates helpful applications of AI—imagine a scenario of robot-conducted surgery performed in a remote community and overseen by a physician in a distant location—from truly frightening scenarios of near-future military applications. Imagine the combination of current combat drone technology with artificial intelligence computer engines giving independence to machine warfare.
The real problem is that we don’t often recognize that we have crossed these kinds of boundaries until we are already on the other side. In science we often push to discover and apply things before we truly understand all the implications—both positive and negative—that will accompany them. We often do things because we can without fully considering if we should, in fact, do them at all.
It’s a sentiment that has been surprisingly echoed among various tech cognoscenti in recent months. In late 2014, Tesla CEO Elon Musk told an MIT symposium, “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.” And in January he put his money behind the cause, donating $10 million to a non-profit for AI safety.
Bill Gates revealed his reservations about AI in a Reddit “Ask Me Anything” session later that same month, writing, “I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
And last year, Stephen Hawking co-authored an article on the risks of AI, saying it could be the “worst mistake in history.”
A Different Vision
Central to these concerns is artificial intelligence’s theoretical independence from human regulatory interaction. To avoid such extreme independence—and that sci-fi end-game of Ultron—maybe we’d be better off adopting the approach of “collaborative intelligence” as computer scientist Susan Epstein proposed in a recent study.
We traditionally build machines because we need help, Epstein writes. But perhaps a less-capable machine could be equally helpful, by allowing humans to do things that they’re better at anyway, such as pattern recognition and problem solving. In other words, built-in inabilities in our intelligent robots could allow them to perform their jobs better while keeping them in check—though at the cost of requiring more interaction with their human overseers.
In the tradition of sci-fi futurists Jules Verne, H.G. Wells and Isaac Asimov, the future is “supposed to be a fully automated, atomic-powered, germ-free utopia” Daniel H. Wilson wrote some years back. A collaborative view of AI, on the other hand, equates to thinking about robots as tools—sometimes very smart ones—that humans can employ and work with rather than a replacement for humans altogether.
This view, though, is at odds with the imperative to instrument and mechanize operations of all sorts wherever they are found. The end game—as Ultron’s creators discover—has disastrous ramifications. We all get to enjoy watching this dystopian future play out on the big screen this week. Luckily for our future selves, in the real world these conversations are still happening as we continue to progress toward smarter and smarter machines.
But maybe not too smart. I still want to win at Robotank.
Top image courtesy Marvel