"It was terribly dangerous to let your thoughts wander when you were in any public place or within range of a telescreen. The smallest thing could give you away. A nervous tic, an unconscious look of anxiety, a habit of muttering to yourself--anything that carried with it the suggestion of abnormality, of having something to hide. In any case, to wear an improper expression on your face...; was itself a punishable offense. There was even a word for it in Newspeak: facecrime..."
Vandalising cameras is not (IMO) a valid response.
These cameras are put up with the intention of protecting life / liberty / property. Perfectly valid and laudable aims in a democracy.
The problem is that the uses of the systems can become subverted and spiral downwards, through lack of controls and oversight.
As an example imagine cameras and software to identify sudden violent actions in a street, flag the incident for review, follow all the involved people as they walk down different streets switching cameras intelligently.
That seems a good thing.
Bad thing: not knowing you were being monitored last night and upon review nothing violent actually happened.
Badder thing: not knowing you are being monitored as part of a skin-color recognition innovation.
Badderer things: oh lots, but all about technologies being used outside of strict, "just cause" reasons.
The existence of the cameras is not the problem. It's how we use their output. Vandalising cameras without putting in place (legislative) checks around the use of surveillance devices is short-sighted Luddism, so not a valid response.
That's like saying "Guns don't kill people, people kill people," even though death by speeding bullet is more prevalent where guns are legal. Guns and cameras should be used only where a credible strategic benefit exists. Right now, they're often part of a scare tactic and thus oppress the people confronted by them. One has to ask whether their output is relevant at all before questioning the usage of said output.
So I think you are suggesting imposing a ban on cameras, like the UK bans guns. That is one approach, and maybe effective. However, I feel there is public good to be derived from surveillance tech, so if the govt has cameras to enable those benefits we still need to manage them, and the difficulty of enforcing such a ban for civilian use suggests to me that mitigating the existence of cameras through open access is the simplest approach.
I wasn't even suggesting guns should be banned everywhere (although they should), let alone cameras. Open access doesn't mitigate the problem; it merely replaces one problem with another. Similarly, publicising requests for footage could be detrimental to law enforcement agencies who are trying to help protect fellow citizens.
Since crime will always exist and criminals will always one-up law enforcement, it's in our best interests to always use the least possible countermeasures. Time will tell when we've reached 1984; society seems to drift closer and closer, albeit ever so slowly.
Yup, because this time it is facial recognition technology. What if it is mind-reading technology in 20 years' time? If you don't draw the line in cement instead of sand at some point, such as now with facial recognition, it will be redrawn over and over and over again.
Given that several members of the hacker community here, familiar with dystopian future literature, accept this as inevitable in many comments, the slippery slope argument is not in the least bit far fetched.
If even here there are people who are accepting, then in other areas of society there are people actively arguing in favor of using facial recognition technology everywhere. Those same people will probably argue for the equivalent of telescreens once something akin to them is invented.
There exist algorithms for estimating emotion from facial video. The accuracy is kind of hit or miss, but it's enough for "he's experiencing anxiety during the security search, he must be a terrorist."
Once you can download an isotope separator or a plague kit, privacy concerns become moot. The societies not running under total AI supervision will simply go extinct. So the question is what kind of supervised society do you want?
I think this is a good argument for requiring all software purchased, and all technology developed, with public funds to be public domain and/or open source.
A lot of people will believe that we should just kill the project and move on, but honestly, the tech is there, and we have all seen what happens with tech: it just keeps going. The functionality will be snuck in as an add on to some other software package anyway.
This isn't to say we shouldn't kill the project -- just that this should be required as open. It won't undo any security/benefit gained from it, much like open crypto. It will however give us the voters the ability to actually provide input about what can/can't be done by the government.
You want to open source a $1B piece of facial recognition software so that _more_ agencies / NGOs / companies / individuals can track our movements? That sounds a little counter productive. At least the FBI (presumably) has accountability and oversight. Can you imagine trying to police abuses if deployed to every municipal law force and marketing agency in the country?
But if it was open source and we knew exactly what it was capable of, it would be easier to pass legislation that restricts it and makes certain uses of it illegal.
This stuff is coming one way or another and wouldn't you rather it be properly legislated? And that we as the public could know their capabilities?
Let's apply your comment to a law against murder. Doesn't have much relevance does it? Your comment isn't germane.
Laws always apply to the people they are written to apply to. It is true that some people don't follow some laws. Presumably violating the law carries some penalty if caught. This does not affect whether or not said law is efficacious or needed.
re police abuse. Facial tracking goes both ways. The authority will, inevitably, acquire the tools of oppression. One of the few defenses is to have the same tools. In fact, being able to id and track the "watchers" is more important/freedom sustaining.
Also, science (as in allowing the code to be studied, learned from, spur further research and discovery) trumps paranoia over "big government".
The cat is already out of the bag. Facebook does mass facial recognition.
Other facial recognition code is already available in open source. The difference is the current open code is not designed to work with a massive database.
What is really stopping other agencies and LEOs from deploying facial recognition is they don't have the implementation budget or the skill to properly scope the project.
Here's a scene from Minority Report with an imagined advertising landscape after automatic recognition becomes common. The assumption was that retinal scans would be used instead of facial recognition, but the imagined scenario is apt.
It's even more apropos when we consider how many people in the future may be wearing Google Glass-like augmented reality devices. In Minority Report, you might see someone else's advertisements, but if they are projected directly on your viewscreen (or retina, etc) then advertising and propaganda gets even more creepy, as it's more private.
A plausible version of such a dystopian vision is one where your new glasses get infected with malware that spams you with advertisements, which I believe I read about in a novel by Gibson. A more frightening one would be one in which what you see is based on propaganda.
Speaking of retinal scanners, I've heard of a high-res scanning camera that can acquire a retina (or maybe it was iris) every ~1.2s IIRC, from people just wandering around. I'd like one in my home, but not in public.
I agree in some sense. This technology as a law enforcement tool is inevitable. I can't see a future in which every national government doesn't use it on some level.
So making it open and promoting discussion about its limits is probably the way to go.
It also makes it remarkably easy to pick out the contractors who simply aren't up to par and are milking the government of money. Every software engineer can review their code and call out those of unacceptable quality and make sure they never get awarded another contract.
I think the 21st century trick for this is to "privatize" it. Public funds go to a trusted private partner who can operate in the dark. Then it seems like the billions are spent and gone by the time the questions are asked and the damage is done.
It's not as simple as github.com/usgov but some of what you're looking for is certainly there. If the government has rights to the source code then it can fall under FOIA. They have ways of denying those requests but only with the proper documentation.
We're at a unique time in the history of surveillance: the cameras are everywhere, and we can still see them. Fifteen years ago, they weren't everywhere. Fifteen years from now, they'll be so small we won't be able to see them. Similarly, all the debates we've had about national ID cards will become moot as soon as these surveillance technologies are able to recognize us without us even knowing it.
I think we will get a subculture or fashion for complicated facial jewelry or weird makeup patterns to confuse facial recognition algorithms. If cameras and facial recognition technology becomes as ubiquitous as some people are afraid of, I'm guessing that punk will rise again and take a principled stand against authority and privacy intrusions.
We are never going to stop the technology being developed and used. And nor should we - the benefits can be enormous.
But we must put in place legislation that prevents abuse - and I think it needs to be broad and overarching.
I suggest:
1. All monitoring of individuals / crowds in public and semi-public areas must be clearly notified, and the raw output of that monitoring must be made publicly available within a short and reasonable time frame.
2. All access to that raw feed and subsequent processed feeds by government officials must be audited and made available in raw form.
3. Exceptions can only be made through warrants signed by civil courts.
Frankly, there are face recognition, laser scanners and more. Getting away with a crime in a public area will be almost impossible in 20 years. That's the good news. The bad news is that if only the police have access to that information, they will abuse it, even though they are mostly good people.
Afaik the DPA is aimed at the processing of data; it specifically excludes data collected for security, warranted or not; it has no teeth; it has no provision to force the publication of data held as a matter of course (opt in, not opt out); and it has, afaik, no case law regarding surveillance (it's mostly focused on internal data processing).
We should assume that if I can see a camera, I can go to surveillance.gov.UK, enter my lat/long, choose the camera from a list, see the raw feed (wave at it), and see who has accessed that feed in the past hour.
If that is not a normal reflex for people waiting to catch their train, or walking to the shops, then we have not got the right legislation.
>>In addition to scanning mugshots for a match, FBI officials have indicated that they are keen to track a suspect by picking out their face in a crowd.
Reminds me of that one scene from Minority Report where Tom Cruise boards a tram, only to have the cameras scan and identify his face and alert the authorities.
I give it a year or 3 before the government issues a secret order to Facebook to hand over all photos and photo tag records. Facebook already has the data required to match a huge number of faces with their names, addresses, likes, interests, political and religious leanings, and everything else.
Sure, the project is just for known criminals now, but it'd be fairly straightforward to start tracking a large portion of the world.
As an individual, yes, "horrible and unforgivable" is the response.
However, I don't think giving up our rights because some "junkie" decided to murder a child is a good or necessary thing to do.
What you're asking is to give up our rights for a single possible case of a person doing something horrible. No, we have a system that punishes that person already. We don't need to restrict people's rights to prevent a rather uncommon thing from happening in the future.
You can not stop a mentally ill person or a criminal from doing something horrible. If they are going to do it, then they will find a way to do it.
The trick is to recognize the symptoms. Sometimes it is almost impossible to recognize it, but it is possible. Education can help in this case, too.
But to track people who are innocent and basically call them potential criminals before any crime is committed, is asinine.
Widely available face recognition could potentially threaten societies in cities.
What would we gain from this? Targeted ads when walking in to your local store? Never having to tag another photo again? (that one is just awesome, but what else?)
Personally this is a tech that I would gladly postpone as long as possible, or at least until the whole tech-thing has stabilized. Most governments still want to censor the internet, think it is within their right, and are as eager as ever to criminalize cryptography - do we want such immature governments to have this tool in their arsenal as well?
>Personally this is a tech that I would gladly postpone as long as possible, or at least until the whole tech-thing has stabilized.
Don't think this whole "tech-thing" is going to "stabilize" anytime soon. And by anytime soon I mean 'ever'. :)
I think that a fundamental problem with increasing invasions of privacy facilitated by rapidly changing technology is that the melting away of privacy occurs almost immediately as the tech becomes available.
It happens on two fronts. A)Governments adopt tech towards this end almost universally, and push (and often surpass) the legal limits imposed by law, and B)Citizens continually degrade it themselves in exchange for services. Just look at Facebook, and how seemingly every service we use is stripping out information about us to build or integrate with a social graph.
You get this effect where younger generations are born with an ever increasing tolerance of privacy-stripping technologies, combined with governments continually pushing past the boundaries of what's allowed. When they do go over the line, they merely deny it until they are caught (if they are), and then change the law to make their behavior legal, with no sanction for past transgressions. The population, ever evolving to respect privacy less, does not fight this.
Unless there is a cultural shift to revise, reiterate, and anchor respect for privacy to a Constitution-sized stone, I think we will see the same trend continue in perpetuity. Especially when you consider what has happened despite our Constitutional privacy protections.
The conflux of private enterprise profit seeking and government desire for control is an extremely powerful dynamic operating against privacy, and it is operating against it every day, by degrees. You are going to get your targeted ads as you walk into a store someday (probably sooner than later), and law enforcement will almost certainly have trivial access to it without a warrant, as they do to many now. I don't see this dynamic changing, and obviously technological advancements aren't going to wait around for us to sort these complex issues out.
There are plenty of potential benefits, for example I flew home (to England) a couple of days ago and rather than queue for a while to have someone check my passport I was able to pop my passport onto a scanner then have the machine check my eyes to see if I am who I claim to be, which meant that in 30 seconds I was out the other side. Facial recognition could speed that up even further.
You could get home and your front door would automatically unlock as you approach it, get into your car with genuine keyless entry rather than just the kind that means the keys are in your pocket, pop into Starbucks and be handed your regular drink (or something you ordered on a phone) by somebody who has never served you before, etc.
Don't get me wrong, there's plenty of arguments against this technology, and I'm not even saying that the good points will end up happening (there's plenty of technological improvements Starbucks could make with existing fairly basic technology already), just that it isn't hard to imagine more than just Facebook-style photo tagging.
Personally I'm not too fussed about the success rate of checking who gets into the UK, while I do care about how long it takes me to get home from abroad, so... don't really mind whether it's good security or not.
The same technology could possibly identify weapons that people had. A side effect of that would be that we could make a very legitimate argument to demilitarize the police force: if a crowd is unarmed, the police should not need guns; tasers and pepper spray are adequate.
Likewise, we would have more recordings of police interactions and be able to lower those corruption rates, as it is now they are currently resisting allowing people to film them.
Socially, if you want anything remotely reasonable to happen here, and the technology is only going to progress and get better, I think we need to counterweigh the benefits with costs to the police. Maybe they don't need guns any more (I know most of them will be against that); if they can tell who and how many people are in a given building at any time, maybe they don't need SWAT teams. Likewise, if they are looking for specific individuals, they can identify more ideal times to apprehend that person without violence.
Existing iris scanners can be deceived by pictures of other people's irises. I do not doubt that facial recognition will be fooled by people making masks (to fool infrared scanning).
The basic principle of biometric reading is the equivalent to sliding your photo under the door to a guard. The guard checks the photo against an already existing set of photos and lets you in. They don't check that the photo is of you - which is a common failing when you let marketing design your security.
This is because everyone thinks a biometric is some sort of password. It is actually a username. You are stating I am the person whom my fingerprints/face/retinas state I am. Then the guard should challenge that person with that face to provide a password that is known to be held by that face.
It's a common fallacy that leads to people having their fingers chopped off to steal their fingerprint-recognising cars.
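To make the username-vs-password point concrete, here's a minimal sketch (all names and the toy "template" strings are hypothetical, not any real biometric API): the biometric match only *identifies* who you claim to be; authentication should still require a secret that only that person knows.

```python
import hashlib
import hmac

# Toy "database": face template -> (username, salted password hash).
# Real systems store fuzzy biometric templates, not exact strings;
# exact dict lookup here is a deliberate simplification.
SALT = b"demo-salt"
USERS = {
    "face-template-alice": ("alice", hashlib.sha256(SALT + b"correct horse").hexdigest()),
}

def identify(face_template):
    """Identification: the biometric acts like a *username* - it only
    narrows down who you claim to be."""
    entry = USERS.get(face_template)
    return entry[0] if entry else None

def authenticate(face_template, password):
    """Verification: the claimed identity must still prove knowledge of
    a secret (the 'guard challenges that face for a password')."""
    entry = USERS.get(face_template)
    if entry is None:
        return False
    _, stored_hash = entry
    candidate = hashlib.sha256(SALT + password.encode()).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(candidate, stored_hash)
```

Under this split, a stolen fingerprint (or a chopped-off finger) gets an attacker only as far as a stolen username would.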
>images of a person of interest from security cameras or public photos uploaded onto the internet could be compared against a national repository of images held by the FBI. An algorithm would perform an automatic search and return a list of potential hits for an officer to sort through and use as possible leads for an investigation.
I feel like this is going to be the prosecutor's fallacy on steroids. Not only do you have the "cold hit" problem of DNA testing (but with way, way more false positives), but you'll end up with a defendant with near 100% odds of positive identification by the victim. Of course "looks like the perpetrator" and "positively identified by victim" almost entirely overlap as evidence, but it won't necessarily feel that way to a jury.
Are false positives really such a problem? Most repeat criminals do it because they are dumb and/or impulsive. If you get 10 hits, you just send a cop around to ask each one if they did it. The one that says yes gets arrested. A major fraction of crimes can be solved this way. Many more can be solved by slapping surveillance robots on people and catching their next crime.
Many cases will be like that yes. And if, as they say, the databases are limited to wanted persons or convicted felons, then the priors won't be terribly out of whack. But it's going to be way easier, PR-wise, to broaden a face database than it ever would have been to broaden a DNA database. It did not take long in the UK, for example, for the National DNA database to creep from "only convicted criminals" to "anyone arrested, even if they were never charged".
And that's with people's DNA, which is pretty invasive to obtain. The government routinely takes pictures of people; there's no visceral moment of privacy invasion there. It would be easy for the database to expand to include a tremendous number of people, and then your false positive rate really is a problem. It stops even being a reliable way to narrow people down, because there just isn't that much variance in human faces, especially if you have to deal with low-resolution and/or noisy data.
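The base-rate arithmetic behind this worry is easy to sketch. All the numbers below are illustrative assumptions (not figures from the article): even a matcher with a tiny per-face false positive rate, searched against a large enough database, produces hit lists dominated by innocent people.

```python
# Assumed inputs - purely illustrative:
database_size = 50_000_000    # faces enrolled in the hypothetical expanded database
false_positive_rate = 0.001   # 0.1% chance an innocent face "matches" a probe image
true_matches = 1              # the actual perpetrator happens to be enrolled

# Expected number of innocent faces that match a single probe search.
expected_false_hits = (database_size - true_matches) * false_positive_rate

# Probability that any given hit on the list is actually the right person.
precision = true_matches / (true_matches + expected_false_hits)

print(f"expected false hits per search: {expected_false_hits:,.0f}")
print(f"chance a given hit is the real person: {precision:.4%}")
```

With these assumed numbers a single search returns on the order of 50,000 false hits, so "the algorithm matched him" carries far less evidential weight than it sounds like to a jury - the prosecutor's fallacy on steroids, as the parent says.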
I'm pretty sure that if our founding fathers were writing the Bill of Rights today there would be an amendment against dragnet electronic surveillance. It's too easy to turn into a panopticon, driven by people who like panopticons such as my mom with an attitude of: "I'm not doing anything wrong and it's worth it to protect against terrorists". Wait until they redefine what "wrong" is.
If looked at from a distance, with the creation of globalization, (free) trading blocs, 'interdependence' (inter means 'to bury', mind you), we are becoming borderless and so the bad guys no longer reside in such-and-such country. The 'terrorist' is everywhere; the 'terrorist' is/could be you. Thus the idea is to let the gov't deal with it, to protect you against you.
Btw, 'love' the argument that 'If I'm not doing anything wrong, why should I care?'
So who is a "criminal"? The article does not clarify exactly what constitutes a criminal record? Just convictions? Felonies only? Or does it include simple misdemeanors such as speeding tickets? Or does it actually (most likely) include all arrests, including those that never result in convictions (where there is NO criminal record)?
I must wonder how many here realize what the aspirations of an IPO or even a corporate investment mean in this age. So many of us are working for what we disdain - without even realizing it. Yet we are the same people that can make the most difference - if we see our responsibilities in a future determined by technological opportunity.
In a sense, yes. Epigenome refers to the overall body of regulatory modifications made to DNA and the DNA storage mechanism. A lot of these modifications involve the attachment of small chemical groups to a distinct bit of DNA, or alternately the proteins which DNA is wrapped around while in storage, or a number of other possible mechanisms.
Let's say your identical twin is known to be a 1-pack-a-day smoker (assume that the hypothetical panopticon has access to his history of credit card purchases). Assume that you, however, abstain from smoking and also avoid secondhand smoke.
There will be certain modifications to your twin's genetic material which may be detectable in the next few decades. These modifications may allow the tracker to positively distinguish between you and your twin.
Regarding your question -- smoking (or exercise, or severe depression, etc.) affects your physiology, which in turn may cause changes to your epigenome.
Don't worry, they'll cross-reference facial recognition matches with your cell phone location in realtime. Just make sure you carry your cell phone at all times. Your brother will be identified by having no cell phone to cross-reference with, or one with a name that does not match the appearance.
This isn't a new hat. It is something already being used by police departments to identify suspects on video by comparing their facial attributes against records produced by analyzing driver's license photographs.
I was thinking the same thing. I'm wondering what Germany will use as an excuse to implement similar system after telling Facebook to destroy their database.
That's fantastic news! I wear those terribly unfashionable goggle-like sunglasses for old people that go over my prescription glasses. Looks like I'm safe for the near future. :)
1984, Book 1, Chapter V