
Friday, April 1, 2016

TERMINATOR NOW: The Rise Of The Machines And A Post-Human Future - Man Builds "Scarlett Johansson" Robot From Scratch To "Fulfil Childhood Dream", And It's SCARILY LIFELIKE!

The robot has been modelled on a Hollywood actress. REUTERS/Bobby Yip

April 1, 2016 - TECHNOLOGY - A humanoid obsessive has built an incredibly realistic female robot from scratch - and it's got more than a passing resemblance to Avengers star Scarlett Johansson.

Ricky Ma, a 42-year-old product and graphic designer, has spent more than $50,000 (£34,000) and a year and a half creating the female robot prototype, Mark 1.

The designer confirmed the scarily lifelike humanoid had been modelled on a Hollywood star, but wanted to keep her name under wraps.

It responds to a set of programmed verbal commands spoken into a microphone and can form moving facial expressions, but Ricky says creating it wasn't easy. He said he was not aware of anyone else in Hong Kong building humanoid robots as a hobby and that few in the city understood his ambition.

Ricky said: "When I was a child, I liked robots. Why? Because I liked watching animation. All children loved it. There were Transformers, cartoons about robots fighting each other and games about robots.

"After I grew up, I wanted to make one. But during this process, a lot of people would say things like, 'Are you stupid? This takes a lot of money. Do you even know how to do it? It's really hard'."

Besides movements of its arms and legs, turning its head and bowing, Ma's robot, with blonde hair and hazel eyes, can form detailed facial expressions. Ricky has dressed 'her' in a crop top and a grey skirt.


It has cost Ricky more than £34k to achieve his childhood dream. REUTERS/Bobby Yip

Scarlett Johansson as she looks in real life. Getty

Ricky has dressed the model in a white crop top and grey skirt. REUTERS/Bobby Yip

In response to the compliment, "Mark 1, you are so beautiful", the robot bows as the 'muscles' around its eyes relax and corners of its lips lift, forming a smile.

It then replies: "Hehe, thank you."
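The compliment-and-reply behaviour described above amounts to keyword-matched command/response pairing. Ricky Ma's actual software is not public, so the sketch below is purely illustrative: every phrase, gesture and function name is a hypothetical stand-in.

```python
# Hypothetical sketch of keyword-matched command/response pairing.
# Phrases, gestures and names are illustrative only; the article does not
# describe Mark 1's actual implementation.

RESPONSES = {
    "you are so beautiful": ("smile_and_bow", "Hehe, thank you."),
    "hello": ("nod", "Hello!"),
}

def respond(transcribed: str):
    """Match a transcribed utterance against the programmed phrases."""
    transcribed = transcribed.lower()
    for phrase, (gesture, reply) in RESPONSES.items():
        if phrase in transcribed:
            return gesture, reply
    return None, None  # unrecognized speech: stay idle

print(respond("Mark 1, you are so beautiful"))
# -> ('smile_and_bow', 'Hehe, thank you.')
```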

A 3D-printed skeleton lies beneath Mark 1's silicone skin, covering its mechanical and electronic interior.

About 70% of its body was created using 3D printing technology.

Creating the robot, Ma adopted a trial-and-error method in which he encountered obstacles ranging from burned-out motors to the robot losing its balance.

"When I started building it, I realised it would involve dynamics, electromechanics and programming. I have never studied programming, how was I supposed to code?


The creation took Ricky more than a year and a half to make. REUTERS/Bobby Yip

The robot can smile and respond to basic commands. REUTERS/Bobby Yip

Ricky hopes investors will buy his prototype robot. REUTERS/Bobby Yip

"Additionally, I needed to build 3D models for all the parts inside the robot. Also, I had to make sure the robot's external skin and its internal parts could fit together. When you look at everything together, it was really difficult," said Ma.

But with Mark 1 standing behind him, Ma said he had no regrets.

"I figured I should just do it when the timing is right and realise my dream. If I realise my dream, I will have no regrets in life," he said.

Ma, who believes the importance of robots will grow, hopes an investor will buy his prototype, giving him the capital to build more.

He wants to write a book about his experience to help other enthusiasts.

The rise of robots is among disruptive labour market changes that the World Economic Forum warns will lead to a net loss of 5.1 million jobs over the next five years.


WATCH: $50,000 robot that looks like Scarlett Johansson.



- Mirror.




Thursday, April 9, 2015

TERMINATOR NOW: The Rise Of The Machines - U.S. Pentagon Gears Up For Robot Warfare; United Nations Urged To Ban "Killer Robots" Before They Can Be Developed!

Before leaving office, Defense Secretary Chuck Hagel got a look at high-tech projects being developed by the Defense Advanced Research Projects Agency.
Brad Tousley demonstrated a robot that would assist wounded warriors. (Associated Press)  

April 9, 2015 - UNITED STATES - Deputy Defense Secretary Robert Work on Wednesday outlined the Pentagon’s plans for an advanced war-fighting strategy involving robot weapons and remote-controlled warfare.

In a speech to the Army War College Strategy Conference, Mr. Work said the “third offset strategy” will rely heavily on autonomous systems that will allow machines and U.S. technological superiority to win wars.

The strategy follows two earlier “offsets” — the use of asymmetric means to counter enemy advantages. During the Cold War, strategic deterrence and tactical nuclear arms were used to offset the Soviet Union’s ground force numerical advantages. In the 1970s, precision-guided conventional weapons were deployed to offset the quantitative shortcomings of foreign conventional forces.

Mr. Work said precision-guided warfare is reaching the end of its shelf life as foreign states have developed countermeasures.

The third offset will be designed to defeat states like China, which is developing niche, offset weapons such as anti-ship ballistic missiles and anti-satellite arms.

“The real essence of the third offset strategy is to find multiple different attacks against opponents across all domains so they can’t adapt, or they adjust to just one, and they die before they can adapt again,” he said.

Mr. Work said defense strategists are divided between those who seek to continue to focus on low-end conflict and those who say future wars will require high-end forces for use against competitor states with large militaries, like China and Russia. “We don’t have an answer right now” on which direction the Pentagon will go, he said.

The deputy defense secretary said the offset strategy calls for adapting “three-play chess” to modern warfare, in which U.S. military forces will employ highly skilled people operating advanced technological machines against less-capable forces.

Mr. Work said the “Air Sea Battle” concept, designed to break into Asia against Chinese missiles and submarines, has evolved into “Air Land Battle 2.0.”

“Air Sea Battle, in my view, kind of went wrong,” said Mr. Work, one of the concept’s architects.

The revised concept will involve avoiding being targeted by massive Chinese missile salvos or submarine attacks through “getting into their networks, blowing them up and keep them from seeing you,” he said.

Next, salvo attacks will be countered with defenses designed to hit missiles and destroy submarines and missile-carrying bombers before they fire. Last, after surviving the massed strikes, joint assault forces will be injected to make it an “air-land battle.”

“I believe that what the third offset strategy will revolve around will be three-play combat in each dimension,” Mr. Work said. “And three-play combat will be much different in each dimension [air, sea, land], and it will be up for the people who live and fight in that dimension to figure out the rules.”

“We will have autonomy at rest, our smart systems being able to go through big data to help at the campaign level and to be able to go through big data at the tactical level. So autonomy at rest and autonomy in motion,” he said.

The most difficult domain for robots is the ground.

“Just getting robots to move over terrain is one of the most difficult things you can imagine,” Mr. Work said.

The Defense Advanced Research Projects Agency, the Pentagon’s high-tech development center, is working on a program called Squad X that is focusing on human-machine interaction at the tactical level. The program includes ground robots, microdrones and squad-sized military units equipped with intelligence and super-lethal weapons that can cover large areas.

“And this is not as far away as you might think,” Mr. Work said, noting that the Army is conducting experiments with “manned and unmanned teaming” of Apache attack helicopters.

Robot-driven vehicles also are coming, along with human-sized robots used as porters, firefighters, countermine robots and countersniper robots.

NORTHCOM ASSESSES ISIL

The key threat posed by the al Qaeda offshoot terrorist group Islamic State is not the infiltration of fighters crossing the U.S. southern border but the group’s sophisticated social media recruitment effort.

That’s the conclusion of Navy Adm. William Gortney, commander of the Colorado-based U.S. Northern Command, which is charged with defense of the homeland.

“I don’t believe that it’s ISIL that we have to worry about infiltrating through our southern approaches,” Adm. Gortney told reporters at the Pentagon on Tuesday, using an acronym for the terrorist group.

“They are a threat to us because they’re using a very sophisticated social media campaign to incite American and Canadian citizens to do harm against American and Canadian citizens,” he said. “That’s how they are trying to attack us in that regard, through that very sophisticated social media campaign.”

In addition to frequent posts seeking recruits placed on Facebook and Twitter, the terrorist group has launched a slick English-language magazine called Dabiq that lists email addresses and an encryption key for potential recruits to contact the Islamic State.

The FBI is engaged in a major law enforcement campaign to stop would-be jihadis in the U.S. from traveling to Syria and Iraq to join the Islamic State.

FBI Special Agent Andrew McCabe, head of the FBI’s Washington field office, said the bureau is struggling to keep up with related cases, including seven people in the past two weeks linked to the Islamic State group. Other cases involve people in their early and middle teens who want to travel overseas.

“It’s not hard to anticipate that, as numbers begin to grow, at some point our traditional investigative approaches and capabilities will be outstripped by the sheer numbers we’re facing,” Mr. McCabe told CBS News on Tuesday.

Adm. Gortney said the border security problem involves “seams” in defenses that enemies are exploiting.

“And they’re going to move through those seams people, drugs, money, weapons or something even greater,” he said. “And that’s why we work so hard looking down there and trying to close those seams with our homeland partnerships and with the other geographic combatant commanders.”

OFFICIALS: VERIFY IRAN DEAL

Paula DeSutter, assistant secretary of state for verification, compliance and implementation in the George W. Bush administration, said Congress should request a formal assessment of whether the Obama administration’s nuclear deal with Iran will include adequate verification provisions to prevent Tehran from cheating.

“Congress has the authority to request a verifiability assessment of the agreement from the administration but has not done so,” Ms. DeSutter told Inside the Ring.

Sen. Bob Corker, Tennessee Republican and chairman of the Senate Committee on Foreign Relations, has introduced legislation that would require the Iran agreement to be submitted to Congress.

Under the Constitution, the Senate has the power of advice and consent on foreign treaties and agreements.

Ms. DeSutter added that, based on the preliminary framework made public last week, adequate verification does not appear possible.

“Transparency measures” announced as part of the Joint Comprehensive Plan of Action will help detect violations at known locations but not at secret undeclared sites, she said.

Fred Fleitz, an ex-CIA analyst and former State Department arms control official, also voiced concerns about verifying Iran’s compliance with the nuclear deal.

“I believe the verification provisions in a nuclear agreement with Iran, based on the new framework, will fall far short of what the Obama administration claimed last week,” Mr. Fleitz said.

“Intrusive inspections of Iran’s nuclear program only appear certain for its peaceful program,” he said. “Inspections of possible military-related nuclear activities would take place under the IAEA additional protocol.”

Iran has said it would adopt a limited “provisional application” of an International Atomic Energy Agency protocol, which Obama administration officials have said is the key to stringent verification.

A formal agreement is to be drawn up by June 30.

“Given Iran’s record of covert nuclear activities and apparent loopholes in the framework on requiring inspections of reports of such activities, I question whether an agreement based on the framework can provide adequate verification to assure that Iran is not pursuing a nuclear weapons program,” Mr. Fleitz said. - Washington Times.

UN urged to ban 'killer robots' before they can be developed

A protest takes place outside the London offices of the defence contractor General Atomics against drones and killer robots.
Photograph: Peter Marshall/Demotix/Corbis

Fully autonomous weapons, already denounced as “killer robots”, should be banned by international treaty before they can be developed, a new report urges the United Nations.

Under existing laws, computer programmers, manufacturers and military commanders would all escape liability for deaths caused by such machines, according to the study published on Thursday by Human Rights Watch and Harvard Law School.

Nor is there likely to be any clear legal framework in future that would establish the responsibility of those involved in producing or operating advanced weapons systems, say the authors of Mind the Gap: The Lack of Accountability for Killer Robots.

The report is released ahead of an international meeting on lethal autonomous weapons systems at the UN in Geneva starting on 13 April. The session will discuss additions to the convention on certain conventional weapons.

Also known as the inhumane weapons convention, the treaty has been regularly reinforced by new protocols on emerging military technology. Blinding laser weapons were pre-emptively outlawed in 1995 and combatant nations since 2006 have been required to remove unexploded cluster bombs.

Military deployment of the current generation of drones is defended by the Ministry of Defence and other governments on the grounds that there is always a man or woman “in the loop”, ultimately deciding whether or not to trigger a missile.

Rapid technical progress towards the next stage of automation, in which weapons may select their own targets, has alarmed scientists and human rights campaigners.

“Fully autonomous weapons do not yet exist,” the report acknowledges. “But technology is moving in their direction, and precursors are already in use or development. For example, many countries use weapons defence systems – such as the Israeli Iron Dome and the US Phalanx and C-RAM – that are programmed to respond automatically to threats from incoming munitions.

“Prototypes exist for planes that could autonomously fly on intercontinental missions [the UK’s Taranis] or take off and land on an aircraft carrier [the US’s X-47B].

“The lack of meaningful human control places fully autonomous weapons in an ambiguous and troubling position. On the one hand, while traditional weapons are tools in the hands of human beings, fully autonomous weapons, once deployed, would make their own determinations about the use of lethal force.

“They would thus challenge longstanding notions of the role of arms in armed conflict, and for some legal analyses, they would be more akin to a human soldier than to an inanimate weapon. On the other hand, fully autonomous weapons would fall far short of being human.”

The report calls for a prohibition “on the development, production and use of fully autonomous weapons through an international legally binding” agreement, and urges states to adopt similar domestic laws.

The hurdles to accountability for the production and use of fully autonomous weapons under current law are monumental, the report states. “Weapons could not be held accountable for their conduct because they could not act with criminal intent, would fall outside the jurisdiction of international tribunals and could not be punished.

“Criminal liability would likely apply only in situations where humans specifically intended to use the robots to violate the law. In the United States at least, civil liability would be virtually impossible due to the immunity granted by law to the military and its contractors and the evidentiary obstacles to products liability suits.”

Bonnie Docherty, HRW’s senior arms division researcher and the report’s lead author, said: “No accountability means no deterrence of future crimes, no retribution for victims, no social condemnation of the responsible party. The many obstacles to justice for potential victims show why we urgently need to ban fully autonomous weapons.”

Human Rights Watch is a co-founder of the Campaign to Stop Killer Robots, which is supported by more than 50 NGOs and supports a preemptive ban on the development, production and use of fully autonomous weapons. - The Guardian.



Thursday, March 5, 2015

THEATRE OF WAR: Russia's Defense Ministry Launches Massive Military Exercises In Its Southern Region - With Over 2,000 Troops, Grad Launchers And Over 500 Items Of Weaponry!

RIA Novosti / Vitaliy Ankov

March 5, 2015 - RUSSIA - The Russian Defense Ministry has launched massive military exercises, involving over 2,000 troops and some 500 items of weaponry in southern Russia, including in the Caucasus.

The field exercises of the air defense forces will take place until April 10 at twelve military firing ranges in Russia's Southern, North Caucasus and Crimean federal districts, as well as at Russian military bases in Abkhazia, South Ossetia and Armenia.

“Over 2,000 troops have been involved in the battle drills, and over 500 items of weaponry and military hardware are being used,” Interfax quoted a statement from the Southern Military District as saying.

The training day will last for 10 hours, and half of the drills will take place at night.

During the month-long drills, Russian troops will be practicing tactical, special and technical skills, alongside driving modern military equipment, shooting and fire control.

The drills involve Gvozdika self-propelled howitzers, Grad multiple rocket launchers, Podnos mortars, Konkurs anti-tank missile systems, and modern Navodchik-2 reconnaissance drones.

“At the end of the field drills, tactical maneuvers are planned,” the statement said.

“Combat units will repel strikes from the aggressor’s tactical aircraft, drones and precision weapon projectiles.”


“They will also carry out operational reconnaissance of the air situation in conditions of electromagnetic warfare,” it added.

On Wednesday, a NATO flotilla of six ships arrived in the Black Sea to take part in the exercises with the Bulgarian, Romanian and Turkish ships, the alliance said in a statement, stressing that the training will take course “in full compliance with international conventions”.

Russia carries out military exercises on a regular basis. The country’s south saw 1,700-strong drills of radiation-protection troops in February, and the Pacific Fleet of the Russian Navy is set to exercise in the Sea of Japan. - RT.




Thursday, November 20, 2014

TERMINATOR NOW: The Rise Of The Machines - 5-Foot-Tall Autonomous "Robocops" Start Patrolling Silicon Valley!

Photo from knightscope.com

November 20, 2014 - CALIFORNIA, UNITED STATES - Autonomous “Robocop”-style robots, equipped with microphones, speakers, cameras, laser scanners and sensors, have started to guard Silicon Valley.

The security robots, called Knightscope K5 Autonomous Data Machines, were designed by a robotics company, Knightscope, located in Mountain View, California.

The robots are programmed to notice unusual behavior and alert controllers. They also have odor and heat detectors, and can monitor pollution levels in carpets as well. Last but not least, their cameras can read up to 300 number plates a minute, monitoring traffic.


Photo from knightscope.com

It works like this: someone steps in front of a robot, which stops and moves around the person while sending video to a control center. If a burglar doesn’t leave, then “the robot is looking at the video, listening for glass breakage, any loud sound that breaking in would cause. We'll get the license plate, picture of the vehicle, geotag location, and time,” says project co-founder Stacy Stephens.
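The patrol flow Stephens describes, stop, stream video to the control center, then capture evidence if the person lingers, can be sketched as a simple event sequence. Knightscope's control software is not public, so every class, field and function name below is a hypothetical illustration.

```python
# Hypothetical sketch of the patrol/alert flow described above.
# All names are illustrative, not Knightscope's actual API.

from dataclasses import dataclass, field

@dataclass
class Incident:
    events: list = field(default_factory=list)

    def record(self, kind: str, detail: str):
        self.events.append((kind, detail))

def handle_obstruction(incident: Incident, person_leaves: bool) -> Incident:
    # Step 1: stop and stream video to the control center.
    incident.record("video", "streaming to control center")
    if person_leaves:
        return incident  # person moved on; nothing more to do
    # Step 2: person lingers -> capture the evidence the article lists.
    for kind in ("license_plate", "vehicle_photo", "geotag", "timestamp"):
        incident.record(kind, "captured")
    return incident

log = handle_obstruction(Incident(), person_leaves=False)
print([kind for kind, _ in log.events])
# -> ['video', 'license_plate', 'vehicle_photo', 'geotag', 'timestamp']
```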

The robotics company says that it will be deploying the robots to patrol malls, offices and local neighborhoods, as well as outdoor spaces such as corporate campuses, college campuses and open-air malls. Knightscope said that future growth opportunities include schools, hotels, auto dealerships, stadiums, casinos, law enforcement agencies, seaports and airports.

WATCH: Robots Patrol Microsoft at TechInMotion.




The company claims that crime in patrolled areas could be cut by an estimated 50 percent with the robots involved.

Reportedly, five K5 robots have been deployed to patrol in the San Francisco Bay Area. In addition to Knightscope’s headquarters, the robots are also guarding an “undisclosed” location in Silicon Valley, KPIX-TV reported.

The robot is 1.5 meters (5 feet) tall and weighs about 136 kilograms (300 pounds), and it navigates using a combination of laser scanning, wheel encoders, inertial measurement and GPS.


It’s also built with a button on its head for human enquiries.

“Imagine a friend that can see, hear, feel and smell that would tirelessly watch over your corporate campus or neighborhood, keep your loved ones safe and put a smile on everyone passing by,” the company says on its website.

“Imagine if we could utilize technology to make our communities stronger and safer … together,” the firm added.

The robots are autonomous and have been developed to function without human interference, and to avoid confrontations.

WATCH: Beta Prototype Demonstration.




The robotics company also responded to fears that no human power will be needed in security.

“I believe robots are the perfect tools to handle the monotonous and sometimes dangerous work in order to free up humans to more judiciously address activities requiring higher-level thinking, hands-on encounters or tactical planning,” the statement on the Knightscope blog read.

So far, people have been treating the robot very nicely, seemingly not quite understanding its functions, Stacy Stephens told CBS.

“The vast majority of people see it and go, ‘Oh my God, that’s so cute.’ We’ve had people go up and hug it, and embrace it for whatever reason,” she said.

If someone decides to attack the robot, though, the result would be first “a loud chirp,” and then louder and louder sounds. “A very, very loud alarm. Think of a car alarm but much more intense,” said Stephens. - RT.




Friday, November 14, 2014

TERMINATOR NOW: The Rise Of The Machines - Special United Nations Meeting Warns Countries That "Killer Robots" Need To Be Strictly Monitored, Urges Stricter Controls Over Autonomous Weapons Systems!


November 14, 2014 - UNITED NATIONS - “Killer robots” – autonomous weapons systems that can identify and destroy targets in the absence of human control – should be strictly monitored to prevent violations of international or humanitarian law, nations from around the world demanded on Thursday.

The European Union, France, Spain, Austria, Ireland, the Netherlands, Croatia, Mexico and Sierra Leone, among other states, lined up at a special UN meeting in Geneva to warn of the potential dangers of this rapidly advancing technology. Several countries spoke of the need for ongoing scrutiny to ensure that the weapons conformed to the Geneva conventions’ rules on proportionality in war.

The Spanish delegation went further, invoking the possibility of a new arms race as developed countries scrambled to get ahead. Ireland, the Netherlands and other countries called for “meaningful human control” of lethal weapons to be enshrined in international law, although the meeting also admitted that the precise definition of that principle had yet to be clarified.

The Geneva meeting was the second major gathering of world powers this year to discuss the looming threat or possibility of fully self-operating lethal weapons. As such, it was an indication of mounting global concern about the technology, as its adoption by military forces gathers apace.

The US, the leader in the field, has already switched most of its aerial surveillance capabilities to unmanned aircraft – though the drones are still controlled by human pilots. It is a natural next step for the US air force to develop systems that can both deliver and then operate missiles and bombs robotically, with only minimal human intervention.

The New York Times reported this week that Lockheed Martin has developed a long-range anti-ship missile for the US air force and navy that can fly itself, with no human touch, for hundreds of miles, changing its flight-path autonomously to avoid radar detection. Britain, Israel and Norway already carry out attacks on radar installations, tanks and ships using autonomous drones and missiles, the paper said.


A man walks past graffiti denouncing strikes by US drones in Yemen, painted on a wall in Sana’a. Photograph: Khaled Abdullah/Reuters


At the previous Geneva meeting on killer robots, Christof Heyns, the UN special rapporteur on extrajudicial, summary or arbitrary executions, called for an outright ban. “Machines lack morality and mortality, and as a result should not have life and death powers over humans,” he said.

Human Rights Watch, which is a co-founder of the Campaign to Stop Killer Robots, told Thursday’s plenary that a ban was the only practical solution. The group lamented the fact that the UN had spent only eight or nine days over the past two years focused on an area that was fast-moving and raised huge legal and ethical issues.

“There is a sense of urgency about how we deal with killer robots. Technology is racing ahead,” it said.

Regulation of autonomous weapons falls under the so-called “convention on certain conventional weapons” or CCW – a part of the Geneva conventions that deals with the impact of the tools of war on civilian populations. Under CCW, weapons that are deemed to affect civilians indiscriminately or to cause inhumane suffering to combatants can be banned or heavily restricted. - The Guardian.




Tuesday, November 11, 2014

TERMINATOR NOW: The Rise Of The Machines - Arms Makers Have Crossed Into TROUBLING TERRITORY; New Arms Race For Weapons Are Now Being DIRECTED BY ROBOTS, Artificial Intelligence Decides What To Target And WHOM TO KILL!


November 11, 2014 - TECHNOLOGY - On a bright fall day last year off the coast of Southern California, an Air Force B-1 bomber launched an experimental missile that may herald the future of warfare.

Initially, pilots aboard the plane directed the missile, but halfway to its destination, it severed communication with its operators. Alone, without human oversight, the missile decided which of three ships to attack, dropping to just above the sea surface and striking a 260-foot unmanned freighter.

Warfare is increasingly guided by software. Today, armed drones can be operated by remote pilots peering into video screens thousands of miles from the battlefield. But now, some scientists say, arms makers have crossed into troubling territory: They are developing weapons that rely on artificial intelligence, not human instruction, to decide what to target and whom to kill.

As these weapons become smarter and nimbler, critics fear they will become increasingly difficult for humans to control — or to defend against. And while pinpoint accuracy could save civilian lives, critics fear weapons without human oversight could make war more likely, as easy as flipping a switch.

Britain, Israel and Norway are already deploying missiles and drones that carry out attacks against enemy radar, tanks or ships without direct human control. After launch, so-called autonomous weapons rely on artificial intelligence and sensors to select targets and to initiate an attack.

A Long Range Anti-Ship Missile prototype, launched by a B-1 bomber, is designed to maneuver without human control. Credit: Defense Advanced Research Projects Agency

Britain’s “fire and forget” Brimstone missiles, for example, can distinguish among tanks and cars and buses without human assistance, and can hunt targets in a predesignated region without oversight. The Brimstones also communicate with one another, sharing their targets.

Armaments with even more advanced self-governance are on the drawing board, although the details usually are kept secret. “An autonomous weapons arms race is already taking place,” said Steve Omohundro, a physicist and artificial intelligence specialist at Self-Aware Systems, a research center in Palo Alto, Calif.

“They can respond faster, more efficiently and less predictably.”

WATCH: Animation of how new missiles may work.



Concerned by the prospect of a robotics arms race, representatives from dozens of nations will meet on Thursday in Geneva to consider whether development of these weapons should be restricted by the Convention on Certain Conventional Weapons. Christof Heyns, the United Nations special rapporteur on extrajudicial, summary or arbitrary executions, last year called for a moratorium on the development of these weapons.

The Pentagon has issued a directive requiring high-level authorization for the development of weapons capable of killing without human oversight. But fast-moving technology has already made the directive obsolete, some scientists say.

“Our concern is with how the targets are determined, and more importantly, who determines them,” said Peter Asaro, a co-founder and vice chairman of the International Committee for Robot Arms Control, a group of scientists that advocates restrictions on the use of military robots. “Are these human-designated targets? Or are these systems automatically deciding what is a target?”

Weapons manufacturers in the United States were the first to develop advanced autonomous weapons. An early version of the Tomahawk cruise missile had the ability to hunt for Soviet ships over the horizon without direct human control. It was withdrawn in the early 1990s after a nuclear arms treaty with Russia.

Back in 1988, the Navy test-fired a Harpoon antiship missile that employed an early form of self-guidance. The missile mistook an Indian freighter that had strayed onto the test range for its target. The Harpoon, which did not have a warhead, hit the bridge of the freighter, killing a crew member.

Despite the accident, the Harpoon became a mainstay of naval armaments and remains in wide use.

In recent years, artificial intelligence has begun to supplant human decision-making in a variety of fields, such as high-speed stock trading and medical diagnostics, and even in self-driving cars. But technological advances in three particular areas have made self-governing weapons a real possibility.

New types of radar, laser and infrared sensors are helping missiles and drones better calculate their position and orientation. “Machine vision,” resembling that of humans, identifies patterns in images and helps weapons distinguish important targets. This nuanced sensory information can be quickly interpreted by sophisticated artificial intelligence systems, enabling a missile or drone to carry out its own analysis in flight. And computer hardware hosting it all has become relatively inexpensive — and expendable.

The missile tested off the coast of California, the Long Range Anti-Ship Missile, is under development by Lockheed Martin for the Air Force and Navy. It is intended to fly for hundreds of miles, maneuvering on its own to avoid radar, and out of radio contact with human controllers.

In a directive published in 2012, the Pentagon drew a line between semiautonomous weapons, whose targets are chosen by a human operator, and fully autonomous weapons that can hunt and engage targets without intervention.

Weapons of the future, the directive said, must be “designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

The Pentagon nonetheless argues that the new antiship missile is only semiautonomous and that humans are sufficiently represented in its targeting and killing decisions. But officials at the Defense Advanced Research Projects Agency, which initially developed the missile, and Lockheed declined to comment on how the weapon decides on targets, saying the information is classified.

“It will be operating autonomously when it searches for the enemy fleet,” said Mark A. Gubrud, a physicist and a member of the International Committee for Robot Arms Control, and an early critic of so-called smart weapons. “This is pretty sophisticated stuff that I would call artificial intelligence outside human control.”

Paul Scharre, a weapons specialist now at the Center for a New American Security who led the working group that wrote the Pentagon directive, said, “It’s valid to ask if this crosses the line.”

Some arms-control specialists say that requiring only “appropriate” human control of these weapons is too vague, speeding the development of new targeting systems that automate killing.

Images from a computer showing a strike by a Brimstone missile, a British weapon, on an Islamic State armed truck in Iraq. The “fire and forget” missile can distinguish among tanks and cars and buses without human assistance. Credit Ministry of Defense/Crown Copyright, via Associated Press

Mr. Heyns, of the United Nations, said that nations with advanced weapons should agree to limit their weapons systems to those with “meaningful” human control over the selection and attack of targets. “It must be similar to the role a commander has over his troops,” Mr. Heyns said.

Systems that permit humans to override the computer’s decisions may not meet that criterion, he added. Weapons that make their own decisions move so quickly that human overseers soon may not be able to keep up. Yet many of them are explicitly designed to permit human operators to step away from controls. Israel’s antiradar missile, the Harpy, loiters in the sky until an enemy radar is turned on. It then attacks and destroys the radar installation on its own.

Norway plans to equip its fleet of advanced jet fighters with the Joint Strike Missile, which can hunt, recognize and detect a target without human intervention. Opponents have called it a “killer robot.”

Military analysts like Mr. Scharre argue that automated weapons like these should be embraced because they may result in fewer mass killings and civilian casualties. Autonomous weapons, they say, do not commit war crimes.

On Sept. 16, 2011, for example, British warplanes fired two dozen Brimstone missiles at a group of Libyan tanks that were shelling civilians. Eight or more of the tanks were destroyed simultaneously, according to a military spokesman, saving the lives of many civilians.

It would have been difficult for human operators to coordinate the swarm of missiles with similar precision.

“Better, smarter weapons are good if they reduce civilian casualties or indiscriminate killing,” Mr. Scharre said. - NY Times.




TERMINATOR NOW: The Dawn Of The Google Machines - Boston Dynamics Releases Video Of Its Human-Like Robot, Atlas, Performing Karate Moves; NASA Hands Over Operations Of Airfield To Google For Space Exploration And Robotics As The Movement Towards Singularitarianism Accelerates!

Still from youtube video (DRCihmcRobotics)

November 11, 2014 - TECHNOLOGY - Engineers at Google-owned Boston Dynamics have released a new video of its human-like robot, Atlas, and the machine’s demonstrated ability to maintain a karate stance may someday earn it a black belt in martial arts. Boston Dynamics isn’t exactly building a ninja robot by any means, but a video released this week of Atlas mimicking the maneuvers made famous by Ralph Macchio in 1984’s blockbuster Karate Kid is quickly raising questions about what sort of capabilities the world can expect from the next generation of automated androids.

Dawn of the Google Machine - DARPA’s Atlas robot learns karate

The video, released over the weekend by the robotics team at the Florida Institute for Human and Machine Cognition (IHMC), is the latest example out of the lab to exhibit its 6’2”, 330-lb mass of metal in action.

Unlike earlier examples, however, the latest YouTube clip of the robot, nicknamed “Ian,” shows the colossal creation balancing in a way that would be difficult for most anyone to execute, absent the utmost athletic ability.

WATCH: Atlas' Karate Kid.



As RT has reported previously, Boston Dynamics and DARPA, the Pentagon’s personal science lab of sorts, have helped supply Atlas models to institutions across the United States, including Florida’s IHMC, in hopes of seeing what the nation’s brightest robotics engineers are capable of when they port their own personalized software in the skin of the cyborg-like automaton.

And while select teams from coast to coast intend to perfect Atlas in order to make it ideal for assisting with emergency situations and disaster relief in the future, IHMC engineers told IEEE Spectrum that there isn’t all that much behind the unorthodox stance they’ve programmed Ian to adopt. According to the Spectrum’s Evan Ackermann, the IHMC team said they strived to have their robot emulate the iconic Karate Kid pose simply “for the fun and challenge of it.”

“Nobody is quite sure yet what robots are going to have to do in the [DARPA Robotics Challenge] Finals next year. But if part of the disaster scenario involves robots getting their legs swept by evil ninja robots (totally possible), IHMC’s Atlas will be ready for that and more,” Ackermann wrote.

Additionally, he said the latest video is a vast improvement over what engineers unveiled last year when they publicly operated the robot.

“We’re not actually expecting that Atlas will be jumping, but the balance that it’s demonstrating in this ‘Karate Kid’ video has us feeling a lot more optimistic about the DRC Finals, since in the DRC Trials, Atlas could literally be toppled by a gentle breeze,” Ackermann added.

On the IHMC website, the Atlas team says that their “focus on humanoid robots is rooted in a simple concept: Because the robots will be working in environments built for humans, a human-like robot is best-suited to the challenges involved.”

That isn’t to say Atlas is all that human, though. In addition to being outfitted with stereo camera sensors and a laser range finder, each model is made mobile by way of 28 hydraulically-actuated joints. - RT.


Google signs 60-year, $1 billion NASA lease... Projects involving aviation, space exploration, robots,...



Google has signed a long-term lease for part of a historic Navy air base, where it plans to renovate three massive hangars and use them for projects involving aviation, space exploration and robotics.

The giant Internet company will pay $1.16 billion in rent over 60 years for the property, which also includes a working air field, golf course and other buildings. The 1,000-acre site is part of the former Moffett Field Naval Air Station on the San Francisco Peninsula.

Google plans to invest more than $200 million to refurbish the hangars and add other improvements, including a museum or educational facility that will showcase the history of Moffett and Silicon Valley, according to a NASA statement. The agency said a Google subsidiary called Planetary Ventures LLC will use the hangars for "research, development, assembly and testing in the areas of space exploration, aviation, rover/robotics and other emerging technologies."

Google founders Larry Page and Sergey Brin have a well-known interest in aviation and space. The company has recently acquired several smaller firms that are working on satellite technology and robotics. But a Google spokesperson declined Monday to discuss specific plans for the property, which is located just a few miles from the company's main campus in Mountain View.

NASA plans to continue operating its Ames Research Center on the former Navy site. Google will take over operations at the runways and hangars, including a massive structure that was built to house dirigible-style Navy airships in the 1930s. NASA said the deal will save it $6.3 million in annual maintenance and operation costs.

Local officials praised Google's promise to restore the historic structure known as Hangar One, which is a San Francisco Bay Area landmark. U.S. Rep. Anna Eshoo, D-Palo Alto, called the lease agreement "a major win for our region."

Google already has a separate lease for another portion of the former air base, where it wants to build a second campus. Page and Brin have also used the Moffett runways for their collection of private jets, under another lease arrangement that's been criticized by some watchdog groups who say NASA gave the executives a sweetheart deal. - Yahoo.


Robot Brains Catch Humans in 25 Years, Then Speed Right On By

An android Repliee S1, produced by Japan's Osaka University professor Hiroshi Ishiguro, performing during a dress rehearsal of Franz Kafka's "The Metamorphosis." Photographer: Yoshikazu Tsuno/AFP via Getty Images

We’ve been wrong about these robots before.

Soon after modern computers evolved in the 1940s, futurists started predicting that in just a few decades machines would be as smart as humans. Every year, the prediction seems to get pushed back another year.

The consensus now is that it’s going to happen in ... you guessed it, just a few more decades.

There’s more reason to believe the predictions today. After research that’s produced everything from self-driving cars to Jeopardy!-winning supercomputers, scientists have a much better understanding of what they’re up against. And, perhaps, what we’re up against.

Nick Bostrom, director of the Future of Humanity Institute at Oxford University, lays out the best predictions of the artificial intelligence (AI) research community in his new book, “Superintelligence: Paths, Dangers, Strategies.” Here are the combined results of four surveys of AI researchers, including a poll of the most-cited scientists in the field, totaling 170 respondents.




Human-level machine intelligence is defined here as “one that can carry out most human professions at least as well as a typical human.”

By that definition, maybe we shouldn’t be so surprised about these predictions. Robots and algorithms are already squeezing the edges of our global workforce. Jobs with routine tasks are getting digitized: farmers, telemarketers, stock traders, loan officers, lawyers, journalists -- all of these professions have already felt the cold steel nudge of our new automated colleagues.

Replication of routine isn't the kind of intelligence Bostrom is interested in. He’s talking about an intelligence with intuition and logic, one that can learn, deal with uncertainty and sense the world around it. The most interesting thing about reaching human-level intelligence isn’t the achievement itself, says Bostrom; it’s what comes next. Once machines can reason and improve themselves, the Skynet is the limit.

Computers are improving at an exponential rate. In many areas -- chess, for example -- machine skill is already superhuman. In others -- reason, emotional intelligence -- there’s still a long way to go. Whether human-level general intelligence is reached in 15 years or 150, it’s likely to be a little-observed mile marker on the road toward superintelligence.

Superintelligence: one that “greatly exceeds the cognitive performance of humans in virtually all domains of interest.”



Inventor and Tesla CEO Elon Musk warns that superintelligent machines are possibly the greatest existential threat to humanity. He says the investments he's made in artificial-intelligence companies are primarily to keep an eye on where the field is headed.

“Hope we’re not just the biological boot loader for digital superintelligence,” Musk tweeted in August.

“Unfortunately, that is increasingly probable.”

There are lots of caveats before we prepare to hand the keys to our earthly kingdom over to robot offspring. First, humans have a terrible track record of predicting the future. Second, people are notoriously optimistic when forecasting the future of their own industries. Third, it’s not a given that technology will continue to advance along its current trajectory, or even with its current aims.

Still, the brightest minds devoted to this evolving technology are predicting the end of human intellectual supremacy by midcentury. That should be enough to give everyone pause. The direction of technology may be inevitable, but the care with which we approach it is not.

“Success in creating AI would be the biggest event in human history,” wrote theoretical physicist Stephen Hawking, in an Independent column in May. “It might also be the last.” - Bloomberg.



DARPA eyes converting large aircraft into drone carriers

A Boeing B-52 Stratofortress strategic bomber (Reuters / Tim Chong)

The Pentagon’s Defense Advanced Research Projects Agency (DARPA) has launched a contest to find the best solution for large airplanes, such as C-130 transport planes, to carry small drones.

The agency has recently placed a “request for information” in order to explore the possibilities of launching swarms of small UAVs from already existing large aircraft.

“Small unmanned aircraft systems (UAS) have limited range and responsiveness, however, compared to larger airborne platforms,” DARPA stated. “Launching and recovering small UAS from those larger platforms could provide a cost-effective capability over a spectrum of operating environments to greatly extend the range of UAS operations, as well as enable an entirely new operational concept for mission sets that benefit from distributed employment.”

The Defense Advanced Research Projects Agency is interested in the range of drones that would be able to carry out intelligence and military missions, thus limiting the risks that US pilots might otherwise take, DARPA officials said on Sunday, according to the Washington Post.

“We want to find ways to make smaller aircraft more effective, and one promising idea is enabling existing large aircraft, with minimal modification, to become ‘aircraft carriers in the sky,’”
said Dan Patt, a DARPA program manager.

“We envision innovative launch and recovery concepts for new [unmanned aerial systems] designs that would couple with recent advances in small payload design and collaborative technologies.”


DARPA is likely to use planes like the B-52 Stratofortress bomber, B-1B Lancer bomber or C-130 Hercules cargo plane.

According to DARPA’s statement, organizations or individuals that would like to participate in the project must submit ideas by November 26, and their concepts should be realizable within four years. They have to include “system-level conceptual designs” and “feasibility analysis.”

Perhaps Hollywood could pitch some ideas of its own:

WATCH: Scene from the movie Skyline.



WATCH: Scene from the movie The Avengers.




- RT.


"An entire platoon wearing wearable robots"

The military's Defense Advanced Research Projects Agency (DARPA) lab creates stunning inventions that could help our service members stay one step ahead when answering the call of duty.

Those creative and skillful minds gave "CBS This Morning" a sneak peek at technology you may have thought only existed in your dreams, reports CBS News correspondent Chip Reid.

When Jason Kerestes goes for a run, he gets a boost from a strange contraption he wears on his back. He calls it "Airlegs."

"It basically makes you feel like you have bigger muscles," Kerestes said.

Kerestes, a graduate student at Arizona State University, and Professor Tom Sugar, are developing the device for the Pentagon.

The power comes from a tank of compressed air which is connected by pulleys and electronic sensors to braces on the knees.

"We fire air and we pull up on the person's leg to give them assistance at the right time and then this goes back down and back up," Kerestes said. "It's helping you lift your leg so that it will help you run up stairs, it will help you run faster."

At this early stage it reduces the load by 10 percent. The goal is 25 percent, which they said will allow the average soldier or Marine to run a mile in four minutes.

"We do envision an entire platoon wearing wearable robots," Sugar said. "These robots will assist them while carrying 100 pound backpacks."

It's one of hundreds of projects at universities and companies across the country funded by DARPA -- an agency known as the Pentagon's team of mad scientists. Managing them, it's said, is a little like herding cats.

WATCH: "An entire platoon wearing wearable robots"






"Actually, if they're great scientists and engineers, that's exactly what it's like," DARPA director Arati Prabhakar said. "Because you want the people that have immense creativity and are off chasing great ideas."

DARPA was created in 1958 in response to the earth-shaking 1957 launch of Sputnik by the Soviet Union.

"It was a huge wake-up call for the United States," Prabhakar said. "Our core mission is breakthrough technologies for national security."

For example, DARPA was behind some of the key, early research on stealth technology, allowing a U.S. aircraft to evade enemy radar.

But many of DARPA's brainstorms have had an enormous impact well beyond the military.

"Forty-five years ago DARPA did this crazy experiment which was to hook a couple of computers together and have them talk to each other," Prabhakar explained. "That was the beginning of the ARPAnet which became the internet today."

DARPA has played a vital role in hundreds of technologies ranging from sophisticated prosthetic limbs for wounded warriors to GPS. They even developed many of the components in today's smart phones, including SIRI.

Some of DARPA's wildest ideas come from nature -- like their research on mini-robots.

Just as armies of ants work together to accomplish amazing things, DARPA hopes to create armies of mini-robots for micro-manufacturing.

The gecko also caught DARPA's attention because of its ability to climb walls.

"It looks like he is hanging on with ten toes but when you zoom in what you find is there are about half a billion points of contact," Prabhakar said.

So DARPA created a gecko-like material that easily supports the weight of humans.

In a never-before seen video, a Special Forces soldier uses it to climb straight up a wall -- a technology that could one day be used in hostage rescue missions.

The inventors of the "Airlegs" device hope it will not only help the military, but also one day give people with disabilities greater mobility. - CBS News.


Robot courier delivers at Columbus medical center

Although the RoboCourier autonomous mobile robot in use at Midtown Medical Center arrived in September, Darrell Demeritt is still surprised by the machine.

"I will turn around and there it is," said the senior director of laboratory services for Columbus Regional Health.

The robot that delivers specimens by itself on the hospital's third floor might even address him with one of close to 50 phrases it has been programmed to say, such as "How is it going?", "When do I get a day off?" or "I can be bribed with electrons."

Its more job-related messages include "I have a delivery for pathology" and "I am here to pick up for chemistry."

The $38,000 robot was purchased entirely through donations to the Columbus Regional Health Foundation. Demeritt said it was money well spent.

"We are highly automated here. Robots can do all kinds of things. This is about transportation," Demeritt said.

The robot can deliver specimens, pharmacy supplies, surgical equipment and other items using an open container.

Demeritt said lab workers spend far less time walking and waiting because of the robot. They can stay focused, and that means specimens and tests can be analyzed sooner, which translates into faster results and better patient care.

"It is pretty cool," he said. - NewsOK.



Have This Artificially Intelligent Travel Agent Book Your Next Vacation

Fly without the hassles of travel sites (Wikimedia)
Just when we were all convinced that travel agents had become obsolete, it turns out they've actually just gotten a high-tech revamp for the next century.

A link on Reddit tipped us off this morning to Dobby, a service claiming to be an artificially intelligent travel agent that will take care of all of your flight accommodations for you. Simply send Dobby an email telling him where and when you want to go, and he’ll reply with three itineraries in five minutes or less. All that’s left for you to do is choose.

According to the service’s minimalist webpage, Dobby is currently in the “early access” stage. It’s unclear what company is behind this futuristic travel agent or when it will officially launch, but it seems that Dobby’s artificial intelligence might be more helpful than Expedia or Travelocity.

According to the site, he learns users’ travel preferences and habits over time and uses them to build a personalized travel portfolio for each user. He also connects with all airlines, is equipped to handle group trips and can even keep track of frequent flyer miles in order to get users the best deals for their schedules.

After you book, Dobby will automatically generate an expense report for your booking.

Currently, Dobby is only booking flights, but in the future he'll book hotel and car accommodations as well.

The site indicates that users are charged a booking fee based on the frequency of their reservations, but exactly how much the service costs is unclear. Since booking sites like Expedia already help users find cheap flights and don't cost anything to use, it's safe to say Dobby's artificial intelligence had better be pretty impressive to justify whatever fee it ends up charging users.

This is the second time in the past week we've seen air travel get disrupted by tech. Last Friday saw the launch of Connections, a service that offers unused plane tickets to strangers with the same names as the people who originally booked them.

Now if only someone could make an app that could make flight delays go away. - Beta Beat.






Tuesday, November 4, 2014

ARTIFICIAL INTELLIGENCE: The Rise Of The Machines - Microchip Breakthrough Enables Emotional Response In Robots, Electronic Devices; Artificial Intelligence Outperforms Average High School Senior; DARPA-Funded Researchers Test Drone That Can Learn!


November 4, 2014 - TECHNOLOGY - Years ago, Ray Kurzweil popularized the Terminator-like moment he called the 'singularity', when artificial intelligence overtakes human thinking. Nowadays, Kurzweil's vision and the quest for "conscious" sentient robots seem to be just around the corner, if you go by the following reports:

Microchip breakthrough enables emotional response in AI robots and consumer electronic devices

EMOSHAPE (www.emospark.com) has announced the launch of a major technology breakthrough with an EPU (emotional processing unit), a patent-pending technology which creates a synthesised emotional response in machines. This represents a significant advancement in the field of artificial intelligence devices and technologies.

Based on the 8 primary emotions identified by Robert Plutchik's psycho-evolutionary theory, the ground-breaking EPU algorithms effectively enable machines to respond to stimuli in line with one of the 8 primary emotions - anger, fear, sadness, disgust, surprise, anticipation, trust, and joy. This video demonstrates how the EPU empowers machines with empathy: http://youtu.be/nMRjSijBsLc
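As a rough illustration of what an emotion-selection step like this could look like, here is a minimal sketch, an assumption for illustration only and not Emoshape's actual EPU algorithm, that picks the dominant response among Plutchik's eight primary emotions from pre-scored stimulus input:

```python
# Toy sketch (NOT Emoshape's EPU): choose the dominant response among
# Plutchik's 8 primary emotions, given stimulus scores from some
# hypothetical upstream sensor or facial-expression analysis.

PRIMARY_EMOTIONS = ["anger", "fear", "sadness", "disgust",
                    "surprise", "anticipation", "trust", "joy"]

def dominant_emotion(scores):
    """Return the primary emotion with the highest stimulus score.

    `scores` maps emotion names to floats in [0, 1]; emotions not
    present in the input default to 0.
    """
    known = {e: scores.get(e, 0.0) for e in PRIMARY_EMOTIONS}
    return max(known, key=known.get)

print(dominant_emotion({"joy": 0.8, "surprise": 0.3, "fear": 0.1}))  # joy
```

A real emotional-processing system would of course blend emotions over time rather than pick a single winner; this only shows the basic mapping from scored stimuli to one of the eight categories.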

This is the first time that the science and technology industry has empowered machines to respond with human emotions, which is set to deliver a yet undiscovered level of user experience between people and emotionally enabled technology.

Patrick Levy Rosenthal, CEO of Emoshape, said: “How can any inanimate object interact with humans and learn how to please them without empathy? The EPU advancement represents a step change for the future of technological goods such as smartphones, computers, toys, medicine, finance and robotics.”

The US and London-based company is now set to launch the production of their first A.I. home console in time for Christmas 2014. The company is also seeking private investment to help roll out mass production of the EmoSPARK cube. Emoshape has now opened up its capital to investors via fundable.com (http://fundable.com/emoshape-lic).

The EmoSPARK cube can fit in the palm of the hand with a purpose of being a digital friend. It monitors a person’s facial expressions and emotions by capturing images through an external camera. The images are then processed until the cube can recognise who the person using it is, and their relationship to others. It will monitor users’ responses to the world around them with a focus on the user’s reaction to music.

EmoSPARK will learn what is liked and what isn’t liked by using the eight emotions to create a personal map of the user’s personality. It can track joy, sadness, disgust, fear, anger, trust, anticipation and surprise.

About Emoshape


Founded in 2014 and privately funded, Emoshape is dedicated to providing powerful and easy-to-use emotional technologies. Emoshape is a company associated with evolutionary technology that will realise people’s dreams. - Herald Online.

WATCH: Foretelling/Predictive Programming - "Chappie" Trailer starring Hugh Jackman.



Artificial Intelligence Outperforms Average High School Senior

Artificial intelligence in Japan is getting closer to entering college. AI software scored higher on the English section of Japan’s standardized college entrance test than the average Japanese high school senior, its developers said.

The software, known as To-Robo, almost doubled its score on a multiple choice test from its performance a year ago, indicating progress toward a goal set by its developers to eventually pass the entrance exam for Tokyo University, Japan’s most prestigious college.

“The average score for the English section of the standardized entrance exam was 93.1 (out of 200), but the AI scored 95,” a spokesman for NTT Science and Core Technology Laboratory Group said. Last year the software scored 52.

The NTT lab is developing the software alongside the National Institute of Informatics, and is in charge of developing the software’s English capabilities. The project began in 2011 with a 10-year time frame for reaching its goal.

Questions from the test were turned into data that could be recognized by the software. To-Robo then processed the information, distinguishing the logic of exchanges and correctly identifying the right answer out of multiple choices.

For example, it was able to correctly choose the answer that best fits the following conversation:

A: I hear your father is in the hospital.
B: Yes, and he has to have an operation next week.
A: 【  】. Let me know if I can do anything.
B: Thanks a lot.
–Exactly, Yes.
–No problem.
–That’s a relief.
–That’s too bad.
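To illustrate the kind of selection task involved, here is a toy sketch, purely illustrative and not NTT's actual To-Robo system, that picks the reply whose sentiment best matches the dialogue context using a tiny hand-made lexicon; a real system relies on far richer language understanding:

```python
# Toy sketch (NOT To-Robo): choose the multiple-choice reply whose
# sentiment matches the dialogue context. The lexicon and sentiment
# labels are hand-made assumptions standing in for real NLP components.

NEGATIVE_CUES = {"hospital", "operation", "sick", "accident", "died"}

def pick_reply(context, choices, sentiments):
    """`sentiments` labels each choice 'neg', 'pos' or 'neutral'
    (assumed to come from an upstream classifier)."""
    words = set(context.lower().replace(".", "").split())
    wanted = "neg" if words & NEGATIVE_CUES else "pos"
    for choice in choices:           # first choice with matching tone wins
        if sentiments[choice] == wanted:
            return choice
    return choices[0]                # fall back if nothing matches

context = "I hear your father is in the hospital. He has to have an operation."
choices = ["Exactly, yes.", "No problem.", "That's a relief.", "That's too bad."]
sentiments = {"Exactly, yes.": "neutral", "No problem.": "pos",
              "That's a relief.": "pos", "That's too bad.": "neg"}
print(pick_reply(context, choices, sentiments))  # That's too bad.
```

The example conversation above is exactly the kind of case where keyword matching alone fails; To-Robo's reported progress lies in grasping the logic and emotional register of an exchange, not just its vocabulary.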

While NTT lab said To-Robo was getting better at completing conversations, structuring sentences appropriately and grasping the context of a dialogue, it added that the software still needs to improve at understanding more complex exchanges and comprehending the emotions of speakers.

The technology may be developed for human use in the future, the lab said, with translation seen among its possible applications. - WSJ.


DARPA-Funded Researchers Have Tested a Drone That Can Learn


Almost seven years ago, we learned that DARPA was investing millions of dollars in neuromorphic chips. That's a fancy term for a computer chip that mimics a biological cortex—a brain chip. Today, researchers are getting closer. And of course, they're putting those brain chips in drones.

Responding to DARPA's challenge, HRL Laboratories' Center for Neural and Emergent Systems just tested a tiny drone with a prototype neuromorphic chip. The drone packs 576 silicon neurons that communicate through spikes in electricity and respond to data from optical, ultrasound, and infrared sensors. And thanks to that brain-like chip, the little robot doesn't necessarily need a human to tell it what to do. It can learn and act on its own.

It sounds like something out of a science fiction movie, a tiny aircraft that flies around deciding what to surveil or, more frighteningly, what to shoot. MIT's Technology Review explains how the test worked:
The first time the drone was flown into each room, the unique pattern of incoming sensor data from the walls, furniture, and other objects caused a pattern of electrical activity in the neurons that the chip had never experienced before. That triggered it to report that it was in a new space, and also caused the ways its neurons connected to one another to change, in a crude mimic of learning in a real brain. Those changes meant that next time the craft entered the same room, it recognized it and signaled as such.
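The learning loop described above can be sketched in conventional code, as a loose analogy only; the actual chip learns by rewiring connections among its 576 silicon neurons rather than comparing stored vectors:

```python
# Toy sketch (a conventional-software analogy, NOT the HRL neuromorphic
# chip): remember prototypes of previously seen sensor patterns and flag
# a pattern as "new room" when nothing stored is close enough.
import math

class RoomMemory:
    def __init__(self, threshold=1.0):
        self.prototypes = []      # one stored pattern per known room
        self.threshold = threshold

    def observe(self, pattern):
        """Return 'recognized' if the pattern is near a stored
        prototype, otherwise store it and return 'new room'."""
        for p in self.prototypes:
            if math.dist(p, pattern) < self.threshold:
                return "recognized"
        self.prototypes.append(pattern)
        return "new room"

mem = RoomMemory(threshold=1.0)
print(mem.observe((0.2, 0.9, 0.4)))    # new room
print(mem.observe((0.25, 0.85, 0.4)))  # recognized
```

The tuples stand in for the fused optical, ultrasound and infrared readings; the key behavior, recognizing a room on the second visit, matches what the Technology Review test describes.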
So that's pretty cool. No, seriously, that kind of technological prowess is nothing short of astonishing. However, it's hard to deny that a future full of drones with tiny electronic brains is a little bit frightening. They'll surely do lots of good. But the conversation about the ethics of artificial intelligence will only escalate as AI takes to the skies. - Gizmodo.


Tuesday, June 24, 2014

BIG BROTHER NOW: The Rise Of Global Police State - Seattle Woman Sees Drone Peeping Into Her Apartment Window; Chicago Lamp Posts To Be Fitted With Data-Collection Sensors; Police Departments To Get X-Ray Scanner For Vehicle Inspections & “Public Safety”; Facial Recognition Technology Used To Spot Genetic Disorders!

June 24, 2014 - POLICE STATE - Welcome to a world where everything you do is scrutinized, collected, stored, and analyzed.

Seattle Woman Sees Drone Peeping Into Her Apartment Window
File photo of small drone (Photo by: PIERRE ANDRIEU/AFP/Getty Images)

Seattle Police are investigating a report of a drone peeping into a woman’s apartment window.

Police were called to the downtown Seattle apartment complex on Sunday morning after the woman spied an unmanned aerial vehicle hovering outside the building. She said she was concerned the drone was looking into her apartment.

After calling police, an employee of her apartment building said he went outside and saw two men piloting the drone. They packed up their gear, which included a video camera, and drove off before police arrived. Authorities say they are checking for surveillance video that may help identify the men.

Drones and what role they should play in society have been a hot item in Seattle for quite some time. Last year, former Seattle Mayor Mike McGinn ordered the Seattle Police Department to abandon its plan to use drones after an uproar from citizens and privacy advocates. - CBS.


Chicago Lamp Posts To Be Fitted With Data-Collection Sensors
The inner workings of the data collection box. Nancy Stone, Chicago Tribune

The curled metal fixtures set to go up on a handful of Michigan Avenue light poles later this summer may look like delicate pieces of sculpture, but researchers say they'll provide a big step forward in the way Chicago understands itself by observing the city's people and surroundings.

The smooth, perforated sheaths of metal are decorative, but their job is to protect and conceal a system of data-collection sensors that will measure air quality, light intensity, sound volume, heat, precipitation and wind. The sensors will also count people by measuring wireless signals on mobile devices.

Some experts caution that efforts like the one launching here to collect data from people and their surroundings pose concerns of a Big Brother intrusion into personal privacy.

In particular, sensors collecting cellphone data make privacy proponents nervous. But computer scientist Charlie Catlett said the planners have taken precautions to design their sensors to observe mobile devices and count contact with the signal rather than record the digital address of each device.

Researchers have dubbed their effort the "Array of Things" project. Gathering and publishing such a broad swath of data will give scientists the tools to make Chicago a safer, more efficient and cleaner place to live, said Catlett, director of the Urban Center for Computation and Data, part of a joint initiative between the University of Chicago and Argonne National Laboratory, near Lemont.

The novelty of a permanent data collection infrastructure may also give Chicago a competitive advantage in attracting technological research, researchers contend.

"The city is interested in making Chicago a place where innovation happens," said Catlett.

Many cities around the globe have tried in recent years to collect enormous piles of "big data" in order to better understand their people and surroundings, but scientists say Chicago's project to create a permanent data collection infrastructure is unusual.

Data-hungry researchers are unabashedly enthusiastic about the project, but some experts said that the system's flexibility and its planned partnerships with industry warrant close monitoring. Among the questions is whether the sensors are gathering too much personal information about people who pass by without a second thought for the data that their movements, and the signals from their smartphones, may be giving off.

The first sensor could be in place by mid-July. Researchers hope to start with sensors at eight Michigan Avenue intersections, followed by dozens more around the Loop by year's end and hundreds more across the city in years to come as the project expands into neighborhoods, Catlett said.

"Our intention is to understand cities better," Catlett said. "Part of the goal is to make these things essentially a public utility."

Over the last decade many cities have launched efforts to collect data about everything from air quality and temperature at street level to the traffic flow of pedestrians and vehicles, all in the name of making urban centers run more efficiently and safely.


WATCH: New sensors will scoop up 'big data' on Chicago.





Much of the useful data has been "exhaust" from an increasingly digital and technological world, scientists say. Improvements in such technologies have led to novel conveniences like smartphone applications that tell you whether your bus is on time or how backed up the expressway is likely to be when you head home.

But Chicago researchers are hoping to put in place a system that will make this city a leader in research about how modern cities function, Catlett said.

The decision to move forward with the system has unfolded without much attention outside the technology community. Mayor Rahm Emanuel, who rarely misses a chance to push Chicago as an emerging digital hub, has yet to tout the project publicly.

City officials don't have firm expectations about what the data may yield but share researchers' desire to push "Chicago as a test bed of urban analytical research," said Brenna Berman, the city's commissioner of information and technology. "Part of why this is so exciting is a lot of the analytics we do is targeted to a specific problem, and this is more general."

Berman said the investment from the city will be minimal: Between $215 and $425 in city electrician wages to install each box and then an estimated $15 a year for electricity to power each box.

Berman's office had a say in picking the initial sensor lineup, and she said the list was limited to "nonpersonal" data because the city is still working on a privacy and security policy to govern the protection and confidentiality of any data that the system may collect in the future. Berman expects she and Emanuel will agree on a final version of the document by the end of July.

"We've been extremely sensitive to the security and the privacy of residents' data," Berman said.

The city will have the last say on what kind of personal data is gathered by the system, "because they're installed on city property," Berman said.

"Nothing else can be deployed without the city's say-so," she said.

The benefits of collecting and analyzing giant sets of data from cities are somewhat speculative, but there is a growing desire from academic and industrial researchers to have access to the data, said Gary King, director of the Institute for Quantitative Social Science at Harvard University.

"You really don't know until you look," King said.

Although he said he was unfamiliar with the project in Chicago, King likened such projects to the early efforts of looking into deep space with the Hubble Space Telescope, opening new and unknown frontiers of information, "only the telescope is pointed downward" at life on the streets of the city.

While the project is led by Catlett's team and the city, other institutions are involved, he said. The boxes that will hold the sensors are being made by designers at the School of the Art Institute, and Catlett said he has secured more than $1 million in in-kind contributions of engineering help from corporations including Cisco Systems, Intel, Zebra Technologies, Qualcomm, Motorola Solutions and Schneider Electric.

Planners envision a permanent system of data collection boxes that can be used by a range of researchers from the public, private and academic sectors who want to test ideas but wouldn't have the resources to build the testing infrastructure. The system will also be flexible: the boxes will be secure and connected to power and the Internet, but otherwise adaptable to "the latest and greatest technology" in sensors to meet the as-yet-unknown needs of academic and industrial researchers, Catlett said.

While there are plenty of advocates singing the praises of the city's push toward gathering and publishing data, some experts say there are risks of invading the privacy of people who don't know their every movement in public is being observed by a computer and analyzed by someone.

Catlett said the project is designed to keep the kinds of data collected in anonymous forms.

"We don't collect things that can identify people. There are no cameras or recording devices," he said. Sensors will be collecting "sound levels but not recording actual sound. The only imaging will be infrared," rather than video, he said.

But such an effort could still lead to gathering more sensitive information than is intended, said Fred Cate, an expert on privacy matters related to technology who teaches at Indiana University's law school.

"Almost any data that starts with an individual is going to be identifiable," Cate said. When tracking activity from mobile phones, "you actually collect the traffic. You may not care about the fact that it's personally identifiable. It's still going to be personally identifiable."

King, the Harvard sociologist and data expert, agreed that the Chicago scientists will inevitably scoop up personally identifiable data.

"If they do a good job they'll collect identifiable data. You can (gather) identifiable data with remarkably little information," King said. "You have to be careful. Good things can produce bad things."

Officials need to plan for "the natural tendency that economics play," said Cate, the privacy expert. "If you spend a million dollars wiring these boxes, and a company comes in and says 'We'll pay you a million dollars to collect personally identifiable information,' what's the oversight over those companies?"

Decisions about whether the city should allow the system to be used by industry to study people in a way that could identify them is "ultimately one for voters who have to pay more in taxes," King said. "It's a public policy question."

Catlett said the Chicago project's planning has consciously addressed such concerns. Data collection projects have sometimes harbored a false sense of security because their methods save mobile devices' addresses without having a means to "look up" the addresses and connect them to owners, he said.

"However, the danger associated with saving such apparently anonymous data is that it might later be combined with other data sources such that the information can be pieced together to determine identity," Catlett said. "For this reason, we made the decision that the (sensors) will not save address data, and will only count nearby devices."

The sensors will measure foot traffic by counting the number of Wi-Fi- or Bluetooth-enabled devices in range, Catlett said, similar to the way a Wi-Fi router in a coffee shop is aware of all Wi-Fi-enabled devices in its range.

The sensors will broadcast a request every 15 to 60 seconds, requesting nearby devices to respond, Catlett said. The number of responses will be counted and saved, but the software will not collect or save the address, he said.
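The count-but-don't-save scheme described above can be illustrated with a short sketch. Everything here is hypothetical: `scan_nearby_devices` is a stand-in for the sensor's actual radio probe, and the addresses it returns are fabricated locally administered MACs. The point is the shape of the design, in which addresses exist only transiently and only the tally is kept.

```python
import random

def scan_nearby_devices():
    """Hypothetical stand-in for a Wi-Fi/Bluetooth broadcast probe:
    returns the hardware addresses of devices that answered. A real
    sensor would query its radio driver here."""
    return ["02:00:00:00:00:%02x" % n for n in range(random.randint(0, 30))]

def count_and_discard(scan):
    """Count responding devices without persisting their addresses,
    mirroring the privacy design described for the Array of Things:
    the address list is discarded as soon as this function returns."""
    responses = scan()
    return len(responses)

def run_sensor(cycles=3):
    """Collect one count per probe cycle; a deployed sensor would also
    sleep 15-60 seconds between cycles, per the described interval."""
    return [count_and_discard(scan_nearby_devices) for _ in range(cycles)]
```

Whether this is sufficient anonymization is exactly the dispute in the surrounding paragraphs: Cate's point is that the traffic is identifiable the moment it is collected, even if, as here, the software chooses not to keep it.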

Catlett said the fact that all of the data collected will immediately be published will also expose the project to ongoing scrutiny.

Personal data has already been exposed to use by others for years, experts noted. Whether it's use of data from public utility accounts or images from Chicago's massive system of surveillance cameras, traditional notions of privacy are changing and eroding, experts said. And when it comes to private companies seeking your data for commercial reasons, there is often a limit to their intrusion.

"Most companies don't care about you, they care about people like you," King said. - Chicago Tribune.



Police to Get X-Ray Scanner For Vehicle Inspections & “Public Safety”
A new portable backscatter device designed to perform x-rays of objects is set to be used by police departments to inspect vehicles as well as for “public safety,” according to the company behind the new scanner.


WATCH: AS&E - MINI Z.



The video for the handheld MINI Z Backscatter imaging scanner, developed by American Science and Engineering Inc (AS&E), brags that it will “allow operators to see more than ever in more places than ever.” The scanner will be used by “law enforcement, first responders, border control, event security, maritime police and general aviation security,” in order to search for currency, drugs and explosives. Police will use the device to inspect “vehicle bumpers, tires, panels and interiors” and to detect IEDs.

According to AS&E, the scanner represents a “game changer” for law enforcement and border patrol and will be used to ensure “public safety.” However, the company admits that the device “is not designed to scan people” because it emits radiation.

The technology is based on a previous, larger incarnation of the x-ray scanner that was deployed via trucks to conduct roving scans of other vehicles on American streets and highways.

In 2010 it emerged that American Science & Engineering had sold many of the larger devices to U.S. law enforcement agencies, who were already using them on the streets for “security” purposes.

The company’s founder, Joe Reiss, told Forbes that more than 500 backscatter x-ray devices were already being used domestically by U.S. authorities and were being “driven past neighboring vehicles to see their contents.”

Commenting on the roving x-ray vans, EPIC’s Marc Rotenberg warned, “Without a warrant, the government doesn’t have a right to peer beneath your clothes without probable cause. Even airport scans are typically used only as a secondary security measure. If the scans can only be used in exceptional cases in airports, the idea that they can be used routinely on city streets is a very hard argument to make.”

We previously noted how the ultimate end use of body scanners would not be limited to airports, and that they were going to be rolled out on the streets as mobile units that would scan vehicles at checkpoints as well as individuals and crowds attending public events.

Dutch police later announced that they were developing a mobile scanner that would “see through people’s clothing and look for concealed weapons” and that it would be used “as an alternative to random body searches in high risk areas”.

The device would also be used from a distance on groups of people “and mass scans on crowds at events such as football matches.”

The plans mirrored leaked documents out of the UK Home Office three years prior, which revealed that authorities in the UK were working on proposals to fit lamp posts with CCTV cameras that would X-ray scan passers-by and “undress them” in order to “trap terror suspects”. - Info Wars.



Facial Recognition Technology Used To Spot Genetic Disorders
New software will be able to track changes to patients’ features using thousands of photographs

New technology could help doctors to diagnose rare genetic disorders through face-recognition software similar to that used in modern handheld cameras.


Between 30 and 40 per cent of genetic disorders – including Down’s syndrome and the rare Angelman syndrome – involve some kind of change to the face or skull.

The new software is based on studies of thousands of pictures of previously diagnosed patients, and is able to “learn” what facial features to look for and which to ignore when suggesting a diagnosis.

It will also be able to group together patients with unknown disorders who have similar facial features and skull structures – potentially enabling doctors to identify new disorders, and the DNA variations that cause them.

The software has been developed at Oxford University, in a successful collaboration between medical researchers and the university’s Department of Engineering Science.

Using the latest in computer vision technology, the software will “learn” from a growing bank of patient photographs from public and clinical databases. So far, the database extends to nearly 3,000 patients.

While genetic disorders are each individually rare, collectively, conditions that may involve some change to the face or skull affect one person in 17.

The researchers even used an image of Abraham Lincoln, who is thought to have had a rare condition called Marfan syndrome, characterised by long limbs and fingers, as an example of how the machine could help diagnose the syndrome.

Out of 90 possible disorders, Marfan syndrome emerged as among the 10 most likely when Lincoln’s picture was analysed.

The new technology is not intended to replace traditional diagnosis but to assist it, and in some cases to improve diagnosis in parts of the world where local clinicians may lack the required expertise.

Dr Christoffer Nellaker, of the Medical Research Foundation’s Genomics Unit at Oxford, said that a diagnosis of a rare genetic disorder was an important step forward for doctors and patients.

“A doctor should in future, anywhere in the world, be able to take a smartphone picture of a patient and run the computer analysis to quickly find out which genetic disorder the person might have,” he said.

“This objective approach could help narrow the possible diagnoses, make comparisons easier and allow doctors to come to a conclusion with more certainty.”

The technology was developed in close collaboration with Professor Andrew Zisserman, of Oxford’s Department of Engineering Science, and the research is published today in the eLife journal.

Like Google, Picasa and other photo software, it recognises variations in lighting, image quality, background, pose, facial expression and identity. It builds a description of the face structure by identifying corners of eyes, nose, mouth and other features, and compares this against what it has learnt from other photographs fed into the system. - Independent.
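The compare-against-learned-faces step can be sketched as a nearest-neighbour ranking over landmark vectors. This is a toy illustration of the idea only, not the Oxford system: the syndrome names, the landmark coordinates and the plain Euclidean distance are all made up for the example (the real software also learns which features to weight and which to ignore).

```python
import math

# Toy profiles: flattened (x, y) positions of a few facial keypoints,
# averaged over previously diagnosed patients. Values are illustrative,
# not clinical data.
REFERENCE_PROFILES = {
    "syndrome_A": [0.30, 0.40, 0.70, 0.40, 0.50, 0.65, 0.50, 0.85],
    "syndrome_B": [0.28, 0.42, 0.72, 0.42, 0.50, 0.70, 0.50, 0.90],
    "syndrome_C": [0.33, 0.38, 0.67, 0.38, 0.50, 0.60, 0.50, 0.80],
}

def distance(a, b):
    """Euclidean distance between two landmark vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rank_diagnoses(patient_landmarks):
    """Rank candidate disorders by how closely the patient's landmark
    vector matches each learned profile (smaller distance ranks first),
    mirroring the 'suggest, don't decide' role described above."""
    return sorted(
        REFERENCE_PROFILES,
        key=lambda name: distance(REFERENCE_PROFILES[name], patient_landmarks),
    )
```

The output is an ordered shortlist, which matches how the article describes the tool being used: narrowing the possible diagnoses for a clinician rather than issuing one.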