Please visit the new, improved (and source of future updates) SSIT Web site blog:
The Jan 22 Wall St. Journal has an article by Geoffrey A. Fowler evaluating two new very small cameras designed for “life tracking” (taking a picture every 30 seconds or as your environment changes) … on the order of 2,000 pictures a day. This echoes some of the discussion from the ISTAS 13 conference, where the implications of this technology were a major consideration.
Fowler presents one benefit of having a fairly complete picture of your day: scanning back over it to find where he left his glasses (keys, whatever). And of course there is the question of civility raised when you are photographing everyone else just by being there. (He added a camera icon to the outside of one of his devices to make this more obvious to folks.)
As this technology shrinks and gets integrated into things, the idea of “photo free” zones may be impractical. The implications will be widespread … can your device be used as a witness against you? — say in that car accident, or “who was that woman I saw you with last night?” Witnesses to events will have more than just their memory to draw upon in trying to recall the details. (Some of these devices have built-in GPS units, so they capture location and time as well as the ‘view’.)
The article also points out that during a hiking trip with his family, Fowler obtained a dozen good candid pictures along with 1,000 for the trash bin. So either most of the content will be “write only” — never to actually be used/viewed/curated … or it will take up significant time to sort the wheat from the chaff.
Fowler provides a useful way to evaluate technologies that may require an extra amount of consideration, his “relationship test”: “How does this piece of technology change not just my life, but how I interact with you?” — a useful question to add to the SSIT lexicon.
The 14 Jan Wall St. Journal has an article noting that your cell phone is being used to track where you are, and not by the cell phone provider (well, OK, they do as well, but via the cell-tower location process). This tracking occurs when you have your WiFi enabled and pass a detection device. Turnstyle Solutions and Apple iBeacon (Bluetooth) provide devices placed by shop owners and others to detect, record and report your location. Turnstyle works with your device’s WiFi MAC address, and iBeacon with iOS on your phone. iBeacon provides location data for apps, but also for the host location.
The good: Knowing you are there may allow you to pay for goods at checkout without having to get out your credit card. It may provide you with immediate “discount coupons” or other offers. The Apple concept with Bluetooth is promoted as a way to provide ‘fine-tuned’, personally identifiable services such as payment, or any other service that your phone’s location-aware apps may be able to provide.
The bad: Turnstyle is not tied to apps, your cell provider, or your phone OS. It simply uses your MAC address (which is broadcast in the probe requests any WiFi device periodically transmits to identify possible connections). One service Turnstyle provides its customers is a composite of “what other locations your customers visit”. A restaurant offered branded workout shirts as a result of feedback that 250 of its customers went to the gym that month (or at least to a gym that was in the Turnstyle network). One Turnstyle customer is quoted as saying “It would probably be better not to use this tracking system at all if we had to let people know about it.” I find that insightful.
Turnstyle also offers free WiFi in various retail locations. The information about the sites you visit, your searches, etc. can be used to further classify you as a consumer — without collecting “personally identifiable” information (maybe). Any number of data-mining techniques can be combined to get fairly personal here — via apps, site usernames, disclosed email addresses, etc.
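To see why “without personally identifiable information” is a weak guarantee, here is a minimal sketch (hypothetical MAC addresses and locations; this is not Turnstyle’s actual system) of how even hashed MAC addresses still let visits be linked across a sensor network:

```python
import hashlib

def pseudonym(mac):
    # "Anonymize" a MAC address by hashing it; no raw address is stored.
    return hashlib.sha256(mac.lower().encode()).hexdigest()[:16]

# Hypothetical sightings reported by sensors in different shops.
sightings = [
    ("coffee_shop", "AA:BB:CC:11:22:33"),
    ("gym",         "AA:BB:CC:11:22:33"),
    ("pharmacy",    "DE:AD:BE:EF:00:01"),
]

# The same device hashes to the same pseudonym everywhere, so a
# cross-location profile emerges even though no name or raw MAC is kept.
profiles = {}
for location, mac in sightings:
    profiles.setdefault(pseudonym(mac), []).append(location)

for pid, places in profiles.items():
    print(pid, "->", places)
```

One disclosed email address or app login at any single location is then enough to attach a name to the whole profile.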
The ugly: In the context of the article, an example of problematic location tracking might be your visits to a doctor — say an oncology clinic in a monitored area. Combine that with searches on selected drugs and diseases, and what you thought was private medical information is now available, perhaps bypassing health privacy regulations.
Consider the “constellation” of radio beacons you either transmit or reflect. My car keys have an RFID chip, some credit cards have these (as do passports), your cell phone has a unique ID, plus a WiFi MAC address, a Bluetooth ID, apps that may be sending data without your awareness, etc. While any one service may be protecting your anonymity, the set of signals you transmit becomes fairly unique to you. Connecting these with your identity is probably more a question of the abuser’s desire to know than of your rights or security measures.
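The arithmetic behind that uniqueness claim is straightforward: each weak signal narrows the set of matching devices, and (assuming the signals are roughly independent) the narrowing multiplies. A toy sketch, with invented anonymity-set sizes purely for illustration:

```python
import math

# Hypothetical counts of devices sharing each observable value on its own
# (invented numbers, not measurements).
signals = {
    "wifi_mac_vendor_prefix": 500_000,
    "bluetooth_device_name": 10_000,
    "app_beacon_ids": 2_000,
    "rfid_card_type": 50_000,
}

population = 7_000_000_000  # rough device population, for scale

# Under independence, the fraction of devices matching ALL observed
# values is the product of the per-signal fractions.
match_fraction = 1.0
for n_sharing in signals.values():
    match_fraction *= n_sharing / population

expected_matches = population * match_fraction
bits = -math.log2(match_fraction)
print(f"expected devices matching the full constellation: {expected_matches:.2e}")
print(f"combined identifying information: ~{bits:.0f} bits")
```

With these (invented) numbers the expected match count is far below one device, i.e., the constellation is effectively a unique fingerprint even though no single signal identifies you.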
These mechanisms can be used by paparazzi, stalkers, assassins, groupies, and other ne’er-do-wells for their nefarious purposes.
In the Harry Potter series the Marauder’s Map was used to track anyone, at least within Hogwarts. To activate this “Technology” you tapped it with a wand and declared “I solemnly swear that I am up to no good.”
Where have your footprints been taking you lately? And who has been watching them?
The Jan 9 Wall St. Journal points out that credit analysts are starting to use your Facebook, LinkedIn and eBay activities to evaluate you. For example, does your job history and status on these sites correspond with the one you submitted in an application? What are buyers saying about you on eBay (assuming you are selling stuff there)? Etc. In short, your “rep” (as in reputation) is being tracked as it spans social media.
Add to this the “75% of employers check your social media presence before pursuing an interview” (feedback from an HR friend of mine), universities that use your presence as part of their acceptance process (are you really sure you want those party pictures online?), and even schools that have expelled students for violations admitted on their social media sites.
Scott McNealy asserted “You have zero privacy anyway. Get over it,” and it appears the NSA may concur. However, it is not clear this is a situation we should take lying down … anyone want to stand up?
In the SSIT LinkedIn discussion a pointer was posted to a provocative article, “The Closing of the Scientific Mind,” from Commentary Magazine. It raises many issues, including skepticism about the “Singularity” and the Cult of Kurzweil (a delightfully evocative concept as well). One comment posted in that thread suggested that the Singularity was ‘silly’ … which is not a particularly useful observation in terms of scholarly analysis. The author sought to dismiss as unworthy of real consideration a concept that, IMHO, deserves exactly that.
First, let me provide a reference to the Singularity as I envision it. The term originated with Vernor Vinge’s paper for a 1993 NASA conference. It identifies a few paths (including Kurzweil’s favorite: machine intelligence) towards a ‘next generation’ entity that will take control of its own evolution, such that our generation of intelligence can no longer “see” where it is going. Like a dog in a car, we would be along for the ride, but have no idea of where we are really going. Vinge includes biological approaches as well as machine ones (and perhaps underestimates the bio-tech approaches), which establishes the concept beyond the “software/hardware” discussion in Commentary Magazine.
Why might we be able to dismiss (ignore) this concept?
I can envision one bio-tech path towards a new species; it is outlined in Greg Stock’s TED talk on upgrading to humanity 2.0: we add a couple of chromosomes for our kids. I’m thinking two — one has the “patches” for the flaws in our current genome (you know, pesky things like susceptibility to diabetes, breast cancer and Alzheimer’s); the second has the mods and apps that define “The Best that We Can Be”, at least for that month of conception. Both of these will warrant upgrades over time, but by inserting the double hit of both into the genome (going from 46 chromosomes to 50) the “Haves” will be able to reproduce without intervention if they wish (though not, of course, as successfully with have-nots, due to a mismatch in chromosome counts). Will such a Homo Nextus species actually yield a singularity? Perhaps. Is this a good idea? Well, that is debatable — your comments are welcomed.
A Dec. 17 Wall St. Journal article outlines how data mining is being used to identify persons with selected illnesses. While the objective in this case is to find patients for clinical trials, the same approaches are used to target folks for ads, and potentially could influence health, long-term care, or life insurance. For anyone close to the U.S., it would be hard to avoid the political football being played with health insurance over the last few years. But there is a fundamental question about the nature of insurance in a data-mining world: is it no longer ethical?
First, consider what insurance really is. If you have 1,000 homes in a town, one burns down each year, and the average replacement cost is $100,000, then a premium of $100/year is the minimum for an insurer to break even. But most insurance companies are “for profit” and do not want to break even. Actually, they do best when they don’t pay out benefits at all. And here we see the essential tension between the insurer and the insured. Is it any surprise that when a new health care law takes effect in the U.S., health insurance companies drop their higher-risk patients? This is what for-profit means: when you can reduce your risks and raise your revenues, you deliver profit.
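The break-even arithmetic above, plus an illustrative for-profit markup (the overhead and margin rates here are invented for the example):

```python
# 1,000 homes, one expected total loss per year, $100,000 replacement cost.
homes = 1000
expected_losses_per_year = 1
avg_replacement_cost = 100_000

# Minimum premium per home for the insurer to break even.
break_even_premium = expected_losses_per_year * avg_replacement_cost / homes
print(break_even_premium)  # 100.0 dollars per home per year

# A for-profit insurer loads the premium with overhead and margin
# (rates invented for illustration).
overhead_rate, margin_rate = 0.25, 0.10
charged_premium = break_even_premium * (1 + overhead_rate + margin_rate)
print(charged_premium)  # roughly 135 dollars
```

The insurer's profit is whatever remains after actual claims, so every claim avoided (or every risky customer shed) goes straight to the bottom line.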
But the WSJ article points out a disturbing trend: data mining to isolate affected individuals. The records used include credit-card records, online shopping, search preferences, TV viewing habits, vacation styles, and cars driven. “We are now at a point where, based on your credit-card history, and whether you drive an American automobile and several other factors, we can get a very, very close bead on whether or not you have the disease state we’re looking at.” — Roger Smith, Sr. VP of Acurian (WSJ).
If insurance companies can sort out higher risks, then the amortization of risk over large populations can be eliminated. In effect, the insurance premium becomes a pre-payment of future expenses (plus overhead and profit). Increasingly, insurance companies are following this path. Flood insurance (historically supported by U.S. government reinsurance) is rising in price dramatically, and homeowners in wildfire-risk areas are being dropped, potentially because their neighbors have not cleared their properties. It is easy to argue that folks who build glass houses should pay higher ‘stone’ insurance rates, but as this moves from a large “pool” to a “pool of one”, the concept of insurance disappears.
With the emergence of full genome mapping, individual life style tracking, big data and data mining, the profit opportunity will drive insurance companies to sort customers as accurately as possible. The result may be the elimination of insurance, at least in certain areas, as a way to manage risk.
An informative counterpoint is the U.S. Medicare system, which spreads premiums over all employed and insured persons and does not discriminate based on preexisting conditions or the more advanced profiling technology that is emerging. Of course, it is a not-for-profit government program, which is yet another political football.
Has your insurance been affected by these processes? … Don’t comment if so, this blog is mined!
To get technology done right, the folks ‘commissioning’ it need to understand a little bit about what it takes to do it right. Case in point: the U.S. Affordable Care Act (ACA) website. This has generated a lot of heat and little light due to the political stakes and opportunities associated with this particular issue. So let’s ignore the politics.
Information Week points out that 40% of major IT projects fail, and I’ve heard higher percentages. And many of these projects totally fail, not just “can’t handle the traffic”, etc. In short, the expectation that this site would work was optimistic.
CNBC reports that the security for the web site may not be sufficient, and MIT Technology Review points out that the complexity of the project (interoperability with thousands of insurance company sites, real-time integration with the IRS, and ‘last minute’ requirements changes) with no phased-in testing process were complicating factors as well.
Software projects, particularly large scale ones with highly visible deadlines and significant social impact require extra consideration on the part of those commissioning the production. There is an emerging awareness of the need for software engineering as a recognized (and licensed) professional skill. (See IEEE Institute article.) The ACA project is just one of many where this skill set, well established and documented by the IEEE Computer Society, is essential.
So we know how to do this, but it requires an essential understanding on the part of the people involved. Software engineering training, certification, licensing and capability maturity models can only take you so far. You need people who understand these things, as well as how and when to apply them. And these people need to be on the “commissioning” side of the activity as well as the execution side. Corporate or governmental leaders who think “oh, that’s a simple matter of programming” don’t get it. Having clearly defined and comprehensive requirements is a critical part of project success. Systems like the FBI Case File point this out clearly (a contributing factor to that $100 million failure was a continuously fluctuating set of requirements).
Given the ACA challenges, it is a non-trivial accomplishment that the site has become even partially functional.
If we can look beyond the political muck-raking, and consider the lessons to be learned from this situation we just might be able to find our way to a more satisfactory approach to applying technology to meet social objectives.
Your examples are solicited as comments below!
This week (WSJ Nov 26, 2013) the US FDA warned 23andMe to stop marketing their genome analysis services. It is worth taking a look at the issues raised, since they range from lab procedures to the nasty question of who owns your genome (in the U.S.).
An issue identified by the WSJ article is “scrutiny of their laboratory processes.” This is reiterated in part by the phrases “false negative” and “false positive”. It certainly seems reasonable that a service in this business should provide high assurance that you get results based on the materials you submit, and that the genes identified are actually in your genome. However, it appears that this concern is not the real FDA target.
The false positive concern is expressed by the FDA as “For instance, if the BRCA-related risk assessment for breast or ovarian cancer reports a false positive, it could lead a patient to undergo prophylactic surgery…” This seems to presume that a doctor would undertake such surgery without obtaining confirming lab results, a path towards malpractice. The false negative concern has more merit: folks convinced by the results that they are not at risk for a disease may ignore its symptoms.
23andMe has a detailed, readable (!) and informative Terms of Service statement. It covers the types of disclaimers that one would expect, and also some interesting warnings such as “knowledge is irrevocable” — you may not like what you find out. “You may learn information about yourself that you do not anticipate” — ancestry, parentage, etc. may not be what you think. “Genetic Information you share with others could be used against your interests” — such as by health care providers, insurance companies, employers, etc. This includes the fact that you cannot claim to an insurance company that you have not been genetically tested, or answer some questions like “do you have any reason to believe…” on health questionnaires. While, in the U.S., there are laws limiting abuse of genetic information (see http://www.genome.gov/10002328), and in theory it cannot be used in health insurance, it can be used in life insurance or long-term care insurance decisions.
The philosophic crux of the situation is captured in this phrase: “the risk that a direct-to-consumer test result may be used by a patient to self-manage, serious concerns are raised if test results are not adequately understood by patients…” This is where the question of data ownership surfaces. Do you have a right to know all or part of your genome? This phrase suggests you do not. “Direct to consumer” bypasses an important (political funding) constituency known as the medical professionals. And while I am sure there are persons who are not competent to handle genomic information about themselves, it is unclear that regulatory restrictions should prohibit all of us from such information. Ultimately knowledge is power, and when such analysis is done, we have to ask who holds the power (aka the knowledge). IEEE Spectrum ran an article about full genome analysis in March 2013. I was surprised that the writer was not allowed direct access to her results, but only a medical interpretation from a community of doctors. In essence, the companies running the tests and the genetic counselors interpreting them do not recognize the right of an individual to know themselves at the genomic level.
23andMe only evaluates genes of interest, not a full genome. This reduces the possibility that next week’s research will disclose a new genome-associated risk that is evident in your already-completed tests. What will you do if a “dumb” gene is discovered, and you have it? (OK, you wouldn’t be reading this blog is my guess.)
Action? — A petition exists at Whitehouse.gov to ask the President to override the FDA warning. However, this only touches on the essential question of genomic ownership and rights. Some of these questions were raised in this blog in March, and need serious consideration. This area is moving forward on all fronts: the Supreme Court has ruled that genetic content cannot be patented, public figures have undergone preventive surgery, and patents have been issued for designer babies.
So who should have the rights to your genome, and why?
“To imagine how things could go bad, we have to imagine these charming techies [today’s beneficent tech company leaders] turning into bitter elders or yielding their empires to future generations of entitled clueless heirs.” (How Should We Think about Privacy, Scientific American, Nov. 2013) [And yes, this is the same one quoted in the last entry, good article — read it!]
I’ve wondered what the half-life of a Fortune 500 company is. I’ve worked for a few: some, like Intel, when they were not yet in the 500; some, like IBM, that were and are; and some, like Digital, that were and are no more. It is clear, as we watch the life arcs of companies like Digital, Sun Microsystems, Zilog, etc., that most do not have particularly long lives — maybe a decade or three. And I sense that this window may be shortening. Venture capitalists always look for exit strategies – often selling a creative startup that is not getting traction to some lumbering legacy giant who thinks they can fix it. (This occasionally seems more like getting your pet “fixed” than getting your car “fixed”.) But some beat the odds and make the big time. Investors happy, founders happy, occasionally employees (shout out to the Woz, who made pre-IPO stock available to Apple employees) — but when things get big, a different mentality takes charge: milking the cash cow. One reason the big guys buy up the little guys is to keep them from disrupting their legacy business models. Even when they have good intentions, they often are clueless when it comes to keeping the momentum going with the innovators and/or customer base, and eventually drive their acquisition into the ground.
The point of the Scientific American article is that the transition of surviving corporations to new owners, or even the loss of shareholder control, can turn a company dedicated to “Doing No Evil” in a different direction. The article suggests that the current “convenience” facilities in your smartphone that suggest an <advertisers> spot to get lunch, based on your location and local time, could go further, steering your actions at pre-cognitive levels with Pavlovian manipulation. It may not be what the founders of Google, Twitter, Facebook, etc. feel is right … but then, the decision may be mandated by new ownership, stockholder interests, etc.
It has been suggested that no military device has ever been invented that was not eventually used in warfare. I won’t swear that is true, but the corollary is that no path to increased profits has been identified that has not eventually been exploited by corporations. At the same time, we have few ways to protect ourselves from future abuse — policy entities are hesitant to take preventative action, and corporate interests push strongly once they smell blood … excuse me, cash.
“Look no further than the massive statistical calculations that allowed American health insurance companies to avoid insuring high risk customers..” (How Should We Think about Privacy, Scientific American, Nov. 2013) This is a significant and timely observation as the U.S. continues its ongoing battle over health care. But look closely at what is being said, and where it leads in the future.
With the accumulation of both personal data (health insurance companies have access to much or all of your health data) and millions of other individuals’ records, insurance companies can make very informed decisions about rates, pools (aggregating similar risks), and accepting applicants in the first place. Given the profit incentive to do this well, they have learned to do it well. It should be no surprise that, faced with the mandates of the Affordable Care Act, these suppliers have applied triage, raising rates or canceling policies for various groups to assure profitability, and purging individuals with greater risk. This is what capitalism is all about. Somehow American consumers do not understand that insurance is not a “game” where you only play if you can win (i.e., expect benefits greater than your anticipated costs). Insurance used to work on the basis of ‘blind statistics’: the average cost of risk “xyz” and the expected occurrence of “xyz” determined the insurance premium for “xyz”. But the process is no longer blind; it has become well informed, and in the future will become close to fully informed.
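A toy model (all numbers invented) makes the shift concrete: a “blind” pool charges everyone the population-average cost, while an informed insurer prices each risk group at its own expected cost, which for the high-risk group is simply pre-payment of expenses:

```python
# Two risk groups that a blind pool cannot tell apart (invented numbers).
groups = {
    "low_risk":  {"members": 900, "expected_annual_cost": 2_000},
    "high_risk": {"members": 100, "expected_annual_cost": 40_000},
}

total_cost = sum(g["members"] * g["expected_annual_cost"] for g in groups.values())
total_members = sum(g["members"] for g in groups.values())

# Blind statistics: everyone pays the same average premium.
pooled_premium = total_cost / total_members
print("pooled premium:", pooled_premium)  # 5800.0

# Fully informed pricing: each group pays its own expected cost; the
# cross-subsidy from low-risk to high-risk members disappears.
for name, g in groups.items():
    print(name, "pays", g["expected_annual_cost"])
```

Once the insurer can tell the groups apart, the high-risk members’ “premium” of $40,000 is no longer insurance at all, just their own expected bills plus a service fee.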
What does fully informed mean? Well, with full genomic analysis (and some related work) and full lifestyle details, insurance will increasingly become “pre-paid medical expenses” with a profit margin for the insurance company. There will be a “pool of one”: you. And the “insurance company” will be monitoring you via many channels. While “discrimination” based on genomics is illegal for health insurance in the U.S., that may be a moot point given the other sources of information available — such as doctors’ actions based on such information, prescriptions, web postings, etc. We could reach a point where the real margin for health insurance companies comes from accidental deaths, where individuals provide a windfall of pre-paid anticipated expenses that will go unclaimed.
As the title of this column suggests, technology may render insurance obsolete — though it will remain a drain on the consumer economy for the century or so it will take for folks to realize this is the case.