[wordup] How Having Friends Might Be the Key to Both Privacy and Identity
Adam Shand
adam at shand.net
Sat Oct 25 20:43:40 EDT 2003
While this is a new application and a semi-new justification, the idea
itself is hardly new. PGP recommended a technique like this years and
years ago and called it the "web of trust". It was this idea which in
turn inspired the much vaunted, but largely failed, PKI (public key
infrastructure) systems. The problem is that once you make "having
friends" mandatory, you need a system to manage all the connections,
and it turns out that building a user-friendly system that can do this
is hard. Building a system that is protected from subversion is even
harder.
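For what it's worth, the basic mechanism is easy to sketch even if it's hard
to deploy. Here's a toy Python illustration (mine, not anything from PGP or
any real PKI product) of the web-of-trust notion: a key counts as trusted
only if a short chain of signatures connects it back to keys you already
trust. The names and the hop limit are invented for the example.

from collections import deque

# An edge A -> B means "A has signed (vouches for) B's key".
signatures = {
    "alice": {"bob", "carol"},
    "bob":   {"dave"},
    "carol": {"dave", "erin"},
    "dave":  set(),
    "erin":  set(),
}

def is_trusted(my_keys, target, max_hops=2):
    """Breadth-first search outward from the keys I already trust."""
    queue = deque((k, 0) for k in my_keys)
    seen = set(my_keys)
    while queue:
        key, hops = queue.popleft()
        if key == target:
            return True
        if hops < max_hops:
            for signed in signatures.get(key, ()):
                if signed not in seen:
                    seen.add(signed)
                    queue.append((signed, hops + 1))
    return False

print(is_trusted({"alice"}, "dave"))     # True: alice -> bob -> dave
print(is_trusted({"alice"}, "mallory"))  # False: no signature path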
He is right, though, that a peer-to-peer system like Friendster might
well make this easier. But once you make it mandatory you are still
going to have some serious scaling issues (Friendster has barely been
able to scale under its own growth, and it's hardly widely used or known).
Adam.
Via: "SCHWARTFEGER,RICHARD" <richard.schwartfeger at hp.com>
From: http://www.pbs.org/cringely/pulpit/pulpit20031009.html
OCTOBER 9, 2003
I'm With Stupid:
How Having Friends Might Be the Key to Both Privacy and Identity
By Robert X. Cringely
I've been thinking what passes at my house for deep thoughts about
identity and privacy -- thoughts spurred both by my columns of the last
several weeks and the reaction of readers to them. At one extreme, we
have identity zealots who would plant microchips in our kids. At the
other extreme, we have privacy zealots who don't want anyone anywhere
to know anything about them or about what they do. Somewhere in the
middle, we can probably place most of the rest of us. It is doubtful
that we can fully satisfy both extremes at the same time, so the real
question is what path will give us the most identity for the least loss
of privacy or the most privacy with the least amount of identity?
These concepts of identity and privacy are not polar opposites, though
we treat them as such from a public policy standpoint. Privacy
advocates are not opposed to most identity verification schemes that
keep strangers from looting their bank accounts. To be fair though, a
really rabid privacy seeker probably wouldn't have a bank account at
all. At the same time, even the most staunch identity advocates
probably have a few secrets of their own they would like to keep from
public view. I certainly do.
While trying to find this secrecy sweet spot, I came to realize that
the security techniques we use often aren't the ones we really need or
even intend. Take the whole issue of Social Security numbers, for
example. This is a unique identifier every American already has, so
rather than replacing it with something better or with nothing at all,
we fixate on keeping the number secret so it can't be used against us.
But the simple fact is that Social Security numbers aren't secret -- in
large part they have never been -- and they are already being used
against us. So protecting them is like closing the barn door after the
livestock have run away, and replacing them with something else doesn't
necessarily give us more privacy or firmer identity.
Let's think, instead, about what are our real objectives in all these
actions, what underlies this desire for privacy or identity, and how
best to achieve that real objective, not some proxy for it which may or
may not be correct. In the case of identity, what it really comes down
to is that we don't want others being able to impersonate us and
thereby either take our possessions or somehow sully our reputations.
They might do these bad things through the possession of information
like a Social Security number, but it isn't really protecting the
number that's our true objective -- it's protecting our stuff and our
access to more stuff. So too, the real objective of weighing privacy
against the good of the population is not that we have some right to
know what's in your underwear drawer, but that we have a right to
protect ourselves in case what is in that underwear drawer is a
tactical nuclear weapon.
We have the right to keep our stuff and to keep our lives, simple as
that. And anything that doesn't threaten property or safety, well, that
ought to be nobody's business. These are really simple ideas that we
tend to lose sight of in our rush to protect Social Security numbers or
assign national ID cards. Are there ways to achieve these true
objectives without stepping on too many toes or compromising public
safety? I think there are.
At the very center of identity theft, for example, is the mistaken
concept that it is about identity when in fact it is really about
theft. While it doesn't feel good knowing someone has acquired the
means to attempt to impersonate you, what REALLY feels bad is when they
use that information to drain your bank account. It isn't the
impersonation, itself, that does the damage, either; it is the bank
teller giving all your money to the bad guy that really hurts. So
forget about protecting identity. How do we hang onto our loot?
In this banking example, identity thieves take advantage of our
overreliance on one or two highly specific identifiers. Knowing a Social
Security number and a mother's maiden name is pretty much all it takes
to loot a U.S. bank account, often without even knowing the number of
that account. Yet the real question ought not to be, "Does this person
know the right identifying information?" but, "Is this person really
who they say they are?"
We concentrate too much on procedure and not enough on recognizing the
true objective. If two soldiers have the day's password and
countersign, they can get physically close enough to kill each other
without knowing that their uniforms don't match or their language isn't
even the same. We concentrate on the procedure at the expense of common
sense. We can't tell a computer virus without having a virus signature
against which to compare, yet there ought to be some heuristic we can
use in the absence of a signature -- even before it is possible to
generate a signature -- that can tell us this bit of code looks
suspicious.
But we don't find that heuristic or apply it because we've already
decided that doing so is too hard. That means we have decided to be
victims.
Here is the core problem: We are attempting to apply statistical
quality control to maximize societal safety, yet what really
compromises that safety is aberrant behavior in areas we have not
decided to monitor. So we tell the screeners to look for guns and
knives in the possession of airline passengers, but we don't tell them
to look for explosives in shoes. We invent clever identification
schemes and rely on them because they usually work rather than doing
the harder job of building an overall smarter system that can adjust to
changing circumstances.
Look at how this works just in fighting spam. Spam is an ever-mutating
pathogen, always trying different ways not only to make it to our
inboxes, but to get us to open and read the messages once they arrive.
Something about the subject line, the message
body, or the sender is constantly evolving, and as a result, spam will
always be somewhat successful. We can't even end spam by making it
illegal because that is too hard to enforce. Making spam illegal won't
really change anything because spammers mainly expect not to succeed.
They can make an acceptable living in spite of that fact. In order for
their business to be successful, they don't need every message to get
through or to be read -- only a small number of those messages,
probably down in the single percentage points.
Spam response rates are similar to TV advertising response rates, yet
for the most part, we are okay with TV commercials. That's because
commercials actually provide us with a reward for attending -- you get
to watch the TV show. Spammers give us nothing of value beyond the
dubious opportunity to make our penises longer. TV advertisers are part
of a symbiotic system, while spammers are pure parasites.
Spammers play the odds. Most thieves play the odds to some extent,
using the system against us. Most of our protection techniques rely on
playing the odds, too, but while that may protect most of us most of
the time, that system is literally designed to fail because there are
always more good guys than bad guys, giving the bad guys the edge.
There are some car thieves who steal only cars that are parked unlocked
with keys left in the ignition. While this wouldn't seem on the face of
it like a very successful criminal technique, it actually works quite
well when there is a large population of potential victims. What
percentage of parked cars is unlocked? Maybe 10 percent. What percentage
is left with the keys in the ignition? Maybe one half of one percent.
Combining those two numbers suggests that one half of one percent of ten
percent of the cars in any parking garage or parking lot is left unlocked
with the keys in the ignition. That is one car in every 2,000. If a car thief
can look inside one parked car every two seconds, he will find the
vulnerable car -- the car that requires no effort or skill at all to
steal -- in an average of one hour and seven minutes. What makes this
type of crime possible is the physical concentration of cars at the
shopping center. What drives us toward a higher concentration of cars
is that shopping is more efficient if you put all the stores in one
place.
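The arithmetic is easy to check. A few lines of Python using the same
assumed numbers (10 percent unlocked, half a percent with keys, one glance
every two seconds) reproduce the one-in-2,000 figure and the roughly
67-minute average search time.

# Expected time for a thief to find an unlocked car with keys inside,
# using the column's assumed percentages.
unlocked = 0.10
keys_in_ignition = 0.005
seconds_per_car = 2

p = unlocked * keys_in_ignition        # 0.0005, i.e. one car in 2,000
cars_per_hit = 1 / p                   # expected cars checked before a hit
minutes = cars_per_hit * seconds_per_car / 60

print(f"1 vulnerable car in {cars_per_hit:,.0f}")      # 1 in 2,000
print(f"average search time: {minutes:.0f} minutes")   # ~67 minutes (1h 7m)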
So our effort at increasing shopping efficiency makes us more
vulnerable to theft. EVERY effort at increasing shopping efficiency
makes us more vulnerable to theft in the same way that building cities
creates targets for terrorism.
One key to fighting crime, then, is to make some activities less
efficient. At Home Depot in my town, they have started requiring a
picture ID before you can use a credit card, trying to make sure that
people aren't using stolen cards. This extra step makes shopping at
Home Depot less efficient, but it also makes credit card fraud more
difficult to commit.
We could extend this idea to banning cities and shopping centers, but
that wouldn't work either. For one thing, we've already built the
cities and shopping malls.
Yet nearly everything we do to combat crime or enhance safety comes at
the expense of reduced efficiency. So we build airports to make
possible efficient air transportation, then set up metal detectors to
slow down the flow of passengers. We build highways to make car travel
faster, then set speed limits to make it slower.
What works against us is that we have a million years of societal and
biological evolution based on the concept of small tribal groups, yet
only a few centuries of urban life and less than two centuries of mass
transit. One characteristic of tribes is that the members know each
other. So when the lady at the bank recognizes you -- really recognizes
you -- it decreases to almost zero the likelihood that somebody can come
into the bank claiming to be you and steal all your money. This
isn't some clever security design, but an artifact of tribal life. You
don't resent the lady at the bank for knowing you. You are flattered
that she does. And you don't fear that, because she knows you, you are
more likely to be a crime victim. Just the opposite -- we feel safer
because we are known.
But when we try to scale this inherent security up to urban, regional,
national, and international levels, it doesn't work. We either have to
accept less security or impose an artificial system intended to emulate
that lady at the bank. This emulation is at the heart of every security
system everywhere, yet we don't think of it in these terms.
It all comes down to scalability. If we want efficiency, we have to
scale the system up, but that comes at the expense of safety. Even more
to the point, IF WE WANT TO GET REALLY REALLY RICH, we have to scale
the system up and safety be damned because what matters to those who
run the system is accumulating wealth and power, not making the little
people feel safer.
At least that's what we all assume to be true. And as a result of this
assumption we have, for the most part, already made the decision to be
less safe. Now we are mainly looking for ways to finesse the system so other
people have to pay for it, not us.
But does it really have to be like this? What if there was a better
way? What would that way look like? And is there any way we could use
technology to make that happen?
No, it doesn't have to be like this. Yes, there is a better way
(probably several better ways), and technology can be a great enabler
if we use it correctly.
What we need is a way to apply tribal values to international
activities, a way to bottle whatever makes that lady at the bank so
effective, and then spray it wherever we need it.
There are three keys to the system I am about to propose and the first
comes from Volvo, the Scandinavian automobile maker now owned by Ford.
Volvo built more than 400,000 cars in 2002, which qualifies in my view
as mass production -- in other words, a scalable system. But unlike
most of its competitors (even Ford, its owner), Volvo doesn't build
cars on a traditional assembly line. Rather, they are built in what are
called "assembly cells" where a small group of workers build each car
more or less in place. This kind of manufacturing is not as efficient
as using an assembly line and it requires better worker training, but
it also results in higher overall quality and greater manufacturing
flexibility. That Volvo is a successful car company tells me this
manufacturing system is economically competitive. It is also tribal. So
now we know that in order to get really big, you don't inevitably have
to give up tribal values and methods. If Volvo can do it, then others
can do it if they choose to.
The second key to my proposed security system also involves choice --
choosing when to apply the system and when not to. We have a tendency
to come up with some new way of doing things, and then apply it to
everything whether we need to or not. It's like in the book Catch-22
when they had to sign a loyalty oath in order to eat lunch. The issue
here is how often do we really need to prove that we are who we say we
are? Certainly if there is a lot of money involved or a big safety
risk, I say go for it. But if we build a system that makes it
effectively impossible to drain a bank account, overload a credit card,
or crash an airliner into a skyscraper, how much at risk are we for
smaller identity crimes? If the big score becomes impossible, I
maintain that reduces the attraction of smaller scores, and the overall
system will become safer.
The final key to my proposed security system is the actor Kevin Bacon, who
seems to be six degrees or less from every celebrity alive. You see, in
a tribe or village, identity is established by personal relationships,
but to this point, we haven't been able to map those relationships over
long distances. But now, using technology, we can.
My system is based on a registry of friends because we all participate
in virtual tribes that are geographically dispersed. Every person who
wants to have credit, to make a big purchase, or to board a 747 has to
have a list of 10 friends -- people who can vouch for their identity
and know how to test it if needed. That takes us out of the realm of
the mother's maiden name, replacing it with, "What was the nickname I
called you in the fourth grade?"
I am Bob, and these are my 10 friends.
They don't even have to be friends -- just people who know you. You
don't have to tell them they are on your list and you can change your
list as often as you like.
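Here is a minimal sketch of how such a registry might work in code.
Everything in it -- the names, the ten acquaintances, the nickname
question, and the verify-with-friend callback -- is invented for
illustration; the column doesn't specify a mechanism beyond the list and
the nickname-style question.

import random

# Each person files a list of people who know them. Ten names, per the column.
registry = {
    "bob": ["alice", "carol", "dave", "erin", "frank",
            "grace", "heidi", "ivan", "judy", "ken"],
}

def challenge(claimed_name, verify_with_friend):
    """Ask one randomly chosen person from the claimant's list to vouch."""
    friends = registry.get(claimed_name)
    if not friends:
        return False                   # no list on file, no vouching
    friend = random.choice(friends)
    # verify_with_friend stands in for a phone call, e-mail, or instant
    # message: "does the person in front of us answer your question right?"
    return verify_with_friend(friend, claimed_name)

# Simulate a friend who confirms only when a made-up shared memory
# (a fourth-grade nickname) is answered correctly.
def make_friend_check(correct_answer, given_answer):
    def verify(friend, claimed_name):
        return given_answer == correct_answer
    return verify

print(challenge("bob", make_friend_check("stinky", "stinky")))      # True
print(challenge("bob", make_friend_check("stinky", "smith")))       # False
print(challenge("mallory", make_friend_check("stinky", "stinky")))  # False: no list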
Imagine an aerial view of this network of friends. It is so large it
could only be analyzed by a big honking computer, but there is a great
deal to be learned from that analysis. People could disappear and be
noticed, perhaps to be found. Deadbeat dads could be tracked, as could
sexual predators. Epidemics would ripple across the surface of the
model, perhaps leading to targeted anticipatory preventive care, saving
lives. Guys who buy enough fertilizer to blow up a Federal office
building would stand out.
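As a toy illustration of the kind of question that aerial view answers,
here is a short Python sketch (with an invented five-person graph) that
counts the degrees of acquaintance separating two people in the registry,
Kevin Bacon style.

friends = {
    "bob":   {"alice", "carol"},
    "alice": {"bob", "dave"},
    "carol": {"bob", "dave"},
    "dave":  {"alice", "carol", "erin"},
    "erin":  {"dave"},
}

def degrees_of_separation(start, goal, graph):
    """Expand the acquaintance frontier one hop at a time."""
    frontier, seen, hops = {start}, {start}, 0
    while frontier:
        if goal in frontier:
            return hops
        frontier = {n for person in frontier
                      for n in graph.get(person, ()) if n not in seen}
        seen |= frontier
        hops += 1
    return None   # not connected at all

print(degrees_of_separation("bob", "erin", friends))   # 3 hops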
Now before you can say the words "Big Brother," remember that YOU
choose your list of friends so they can be people from work, from
school, from the tennis club, but perhaps not from your Communist cell
or from your swingers club. You can keep private what you want to keep
private because the big picture is what matters here.
The system would be tied together by phone, e-mail, and Internet
messaging. Ultimately, it would come to function like a much larger
version of eBay's feedback system, which would result in subtle pressure
toward more civil behavior -- something we don't have in any practical
sense today.
Maybe this system wouldn't work. You tell me. But I know that what we
have right now isn't working, and I am not sure it can be made to work.
The only answer that makes sense to me is to hearken back to a simpler
time when these crimes just didn't happen. And it is only through
clever application of technology that this can be done.
But it really needs a clever name. Too bad Friendster is already taken.