
Data Mining and Doctor Who

What do data mining and Doctor Who have in common?


 

Start with data mining.  Some companies sell different versions of a database or subscription service based on your location, which they determine from your computer’s IP address.  For example, if you live in the UK and want full access to the UK version of the database, you pay full price, but you can add other locations for a fraction of the cost.  In other cases, due to licensing restrictions, if you have an IP address outside a specific country, you simply can’t access the website.

It is the same for Doctor Who.  If you live inside the UK, you can watch Doctor Who live, streaming via the BBC.  However, if you try to access it from the United States, you get this message.

BBC message

Basically, you can’t view it without having an IP address in the UK.
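The server-side check behind that message can be sketched in a few lines. This is a hypothetical illustration, not how the BBC actually implements it: the lookup table stands in for a real GeoIP database, and the IP addresses are just example values.

```python
# Hypothetical sketch of IP-based geo-restriction. A real site would query a
# commercial geolocation database; this dict is a stand-in for illustration.

GEOIP_TABLE = {
    "81.2.69.142": "GB",    # example UK address
    "203.0.113.7": "US",    # example US address (documentation range)
}

def allow_stream(ip_address, allowed_countries=frozenset({"GB"})):
    """Return True only if the viewer's apparent country is licensed."""
    country = GEOIP_TABLE.get(ip_address, "UNKNOWN")
    return country in allowed_countries

print(allow_stream("203.0.113.7"))   # US address: stream refused
print(allow_stream("81.2.69.142"))   # UK address: stream allowed
```

A location-selectable VPN works around this by making your traffic exit from an address inside the allowed country, so the lookup on the server’s side simply sees “GB”.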

But you really want to access a particular website and crunch all that data… or you really want to watch Doctor Who live, the day it comes out.  What do you do?

Simple:  Sign up for a VPN service.

Not all VPN options are the same.  Some are point-to-point: for example, from your laptop you can securely connect to your company’s office router.  This is what most telecommuters are used to, and it is not the type of VPN I am talking about.  The VPN I am talking about is a location-selectable VPN.  How it works: I buy access to the service, and then I can browse the Internet, securely, appearing as if I am in San Francisco, Dallas, Amsterdam or London (London is in the UK).

So go out and get your private VPN.  It will take a few minutes to install the VPN client on your Mac or PC.  Then data mine or watch Doctor Who to your heart’s content.

Some VPN services:

http://witopia.net

https://www.privateinternetaccess.com/

http://www.ipvanish.com/

https://www.vpnmentor.com/

The 12th Doctor’s first episode premieres live on the BBC on Saturday, August 23rd at 7:50 GMT.

 

My First Month with Google Glass

What was I thinking?  Spend $1,500 on wearable technology that is not quite ready?  It was the same desire that made me get the first Apple I, the first PC, the first iPhone, the first Android (Gphone back then) and many other firsts.  When you want to understand where technology is going, you can’t live it from afar; you must immerse yourself.

Glass-mouse

There are two parts of the Glass experience: (1) how others react to you, and (2) what you experience.

How others react

This is the funny part.  The top questions:

  1. Are you recording me now?
  2. Did you take my picture?
  3. How do they work?
  4. Can I try them on?
  5. How do I get a pair?
  6. Can you see through my clothes?

From the questions, I could tell people are a bit confused.  There has been some negative press: “Glassholes” wearing Glass into a bar and acting inappropriately.  These are probably the same people you would not want to be around anyway.  I think this was a mistake by Google.  The first set of people that got Glass were people who liked gadgets and had the money to afford it.  They should have made the selection more stringent: people who actually wanted to build something on Glass.

My Glass Experience

After the first day, I had buyer’s remorse.  After the second day, I was on the fence.  By day three, I was seeing all sorts of new applications that could be built on the Glass platform.  Fun.

Today, I drive, email, record videos, take pictures, attend conferences and generally have fun with Glass.  The battery life is short, so I always travel with the charging cable.

Today is the day that Google is selling Glass to everyone (I was on an 18-month waiting list).  It will be interesting to see how wider adoption will impact the platform, public acceptance and the applications available.

As an experiment, the next person that asks me if I can see through their clothes with Glass, I am going to do my best to say “yes” with a straight face.


When Marketing Lies About Technology

I’m at a talk about marketing at a conference, sitting in the audience, blending into the mix of SEO students and experts. Unlike most conferences, I am not speaking, not helping with sales at a booth and not scheduled with back-to-back meetings.  This is a chance for me to sit and learn.

At the end of a fantastic panel discussion on SEO tools, demand generation and technology, the panel went into the Q&A section of the talk.  One panelist was asked what made her technology better than the next tool.

“We spider the entire Internet, every day. Every site and keyword, everything, so we have more data to work with,” she said.

Looking around me, I saw wide eyes and nodding heads.  They swallowed it.  What happened next was like an out-of-body experience.

“Buuuuullshit!” I said, just loud enough for the group in the small theater to hear.  I just couldn’t help myself.

The moderator then asked me to, basically, explain myself.  I proceeded to talk about why “spidering the entire Internet” was not possible.  This is an area in which I am a subject matter expert.  I won’t explain it here, but if Google can’t do it… well, you get the idea.  I then asked if she had borrowed Google’s new quantum computer, which got a few laughs.  My goal was not to ridicule, but to recover from my slightly-louder-than-expected comment.  Next, I basically said that I was impressed with what their technology actually does, but that it shouldn’t be misrepresented as “everything on the Internet.”
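To give a feel for why the claim falls apart, here is a rough back-of-envelope calculation.  Every figure below is an assumption chosen purely for illustration (real estimates of the web’s size vary wildly), and even these conservative numbers produce absurd requirements:

```python
# Back-of-envelope sketch: what "spidering the entire Internet, every day"
# would demand. All figures are rough, illustrative assumptions.

pages = 50e9            # assume ~50 billion indexable pages (a low estimate)
avg_page_bytes = 100e3  # assume ~100 KB of HTML per page

bytes_per_day = pages * avg_page_bytes          # total fetched daily
seconds_per_day = 86_400
required_bandwidth_gbps = bytes_per_day * 8 / seconds_per_day / 1e9

print(f"Data fetched per day: {bytes_per_day / 1e15:.0f} PB")
print(f"Sustained crawl bandwidth: {required_bandwidth_gbps:,.0f} Gbps")
```

That is petabytes of raw HTML per day at hundreds of gigabits per second, sustained, before a single byte is parsed, indexed or stored, and before counting the deep web and pages that refuse crawlers entirely.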

Her response was that she was not the “techie person” and that she had gotten over-enthusiastic.  People laughed, and that was the end of it.

The point is that Marketing does not need to lie; it would have been just as impressive if she had portrayed, accurately, what they actually do and how.  This is a problem in many technology companies.  The process starts very much like a myth or legend.

“Any sufficiently advanced technology is indistinguishable from magic.”
Arthur C. Clarke

The technologist creates something that looks like magic and Marketing tries to explain it and the legend grows.  Soon, Sales is fabricating any explanation that sounds good and a technology myth is born.

Don’t do this.  Technology, Sales and Marketing need to be on the same page.  If you don’t achieve unified messaging, someone else is going to call bullshit and you will lose a sale.

 

My Vision of Big Data: 11 Prerequisites

Late yesterday I learned from Twitter that Whitney Houston died.

99% of the Whitney Houston tweets were exactly “RIP Whitney Houston.” Okay… but how did she move you? Do you remember a special dance with that one girl while she was singing? Was her voice so beautiful that it made you tear up? It was for me. Originality is there, but buried on Twitter. I would have enjoyed others’ insights on Whitney, to feel camaraderie in a shared loss. If it existed on Twitter, it was obfuscated behind all the drone “RIP Whitney Houston” tweets. So instead I played some Whitney songs and told my children who the woman with the beautiful voice was.

Twitter is big data.

“Big data” is making the news. The concept has crept from the back pages of technical publications into the mainstream. It’s a new topic, so the reporters have commandeered it. It’s becoming popular, and that’s too bad. Media feeding frenzies perpetuate the peripheral definition; articles get copied over and over again, and people stop thinking.

With their IPO in the news, Facebook has become the poster child for big data. So what is it? What is big data? Simply put: massive amounts of information about millions, and eventually billions, of people. Big data is making the news because of fear – fear of the possibilities of abuse. It sells newspapers and gets clicks and page views, which means we will be hearing a lot about big data. Scare people and make money.

Facebook is big data.

Google is changing its privacy policy. Another media feeding frenzy. If you have a Gmail account, Google+, music, shopping, etc., all the privacy policies are melding into one. I like the idea, and I have to admit I don’t understand the problems people are having. If you use 5 or 10 different Google services, are you really going to read that many different user agreements? I don’t know anyone who actually does. I would prefer to have one policy that covers them all. Google gives these services away; if you don’t like that one, single policy, stop using the services. The chances of people being informed about Google’s policies increase if there is a single policy. It’s a good thing. Stop the bitching.

Google is big data.

Another bit in the news: The Seattle Times reports that a top porn site, Brazzers, was hacked. According to the article and other news regarding it, usernames, passwords and real names were taken. The data is making its way across the Internet on file-sharing sites.

Internet user databases are big data.

In my vision of the world, big data is in its infancy. Don’t freak out for at least 10 years.

Why now? Why is big data entering the mainstream now? It has been around for many years. Large data providers like Experian, Acxiom, and D&B have been collecting data for a long time. What is different now? To ask “why now,” you must understand the continuum of getting at big data.

11 Big Data Prerequisites

  1. The data must be there – this is the most exciting tipping point.  As the CEO of a data-mining software company, I’m still dumbfounded when users expect to get information off the web… that is not there.  It must actually exist.
  2. You must be able to flag it – you can’t store everything and must make choices.  What is important? When does it happen?  Example: a news release with the subject: Nanotechnology.
  3. You must be able to find it – in the absence of a real-time data stream, you must be able to search through data to find a “flag” of what you are looking for.
  4. You must be able to parse it – this is the analysis of relevant grammatical constituents, identifying the parts of what you need from within potential noise.  Example: parsing out the name of an inventor from within an article on nanotechnology.
  5. You must be able to extract it – not the same as parsing.  What if the data is in a PDF file or HTML web page?  In many cases, extraction is about access.  Is the data I am looking for spread across 5 sub-links of a single web page?  Extraction as it relates to the Internet also encapsulates web crawling.
  6. You must be able to process it – this takes CPU cycles. Bigger problems need bigger computers.
  7. You must normalize it – if you have multiple pieces of data on “The Container Company”, “Container Company, The”, “The Container Co”, etc., how do you merge that data?  You must normalize like entities to a standard “canonical form”.  Without it, we’ve got the Data Tower of Babel.
  8. You must be able to store it – big data takes up disk space.
  9. You must be able to index it – if you ever want to find it after you store it, the data needs to be indexed.  This also means more disk space.
  10. You must be able to analyze it – big data needs big (or many distributed) CPUs to crunch the numbers and garner order from the chaos.
  11. There must be a payoff – putting together big data is expensive.  Without an end goal in mind, it is expensive to collect.  Google & Facebook collect, process, index & store data for profit.
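To make prerequisite 7 concrete, here is a minimal sketch of normalizing company-name variants to a canonical form.  Real entity resolution is far more involved; the suffix list and rules below are illustrative only:

```python
# Minimal sketch of normalization to a "canonical form" so records about the
# same company can be merged. The suffix list is an illustrative assumption.
import re

STOP_SUFFIXES = {"inc", "llc", "co", "company", "corp", "corporation", "ltd"}

def canonical_form(name: str) -> str:
    """Lowercase, strip punctuation, drop 'the' and corporate suffixes."""
    tokens = re.findall(r"[a-z0-9]+", name.lower())
    tokens = [t for t in tokens if t != "the" and t not in STOP_SUFFIXES]
    return " ".join(tokens)

variants = ["The Container Company", "Container Company, The", "The Container Co"]
print({canonical_form(v) for v in variants})  # all three collapse to one key
```

Once every variant maps to the same key, the records behind them can be merged, and the Data Tower of Babel starts to come down.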

So what is my vision of “big data”?  What is being talked about in the media is very short-sighted.  I think I know where big data is going, and I’m basing my vision on my prerequisites.

Big Data Thoughts

1: Information is growing beyond the ability of any single source to store and index everything. Therefore, big data can never be “all data.” Facebook and Google cannot store everything, so choices must be made. Google already makes them, indexing what it deems relevant.

2: The amount of data about people on Facebook is paltry… compared to the maximum possibilities. Yes, in aggregate, it is the largest set of minimal data. Think for a second about your day. What would it take to record your entire life in HD, from 7 different angles? This future data stream would include everything you heard, read, and generally interacted with.

3: Mass personal data recording is on the horizon. The first phase is already starting; the only limit is reasonable storage. The term is “LifeLogging.” There are devices you can wear that take a picture every 30 seconds. High-quality LifeLogging technology will be critical in the future. A picture every 30 seconds captures 1/900th of full video (at 30 frames per second). If the LifeLogging device is just the conduit rather than the storage medium, the lifelog could be stored on your home PC. With H.264 video compression, 5.5 hours of 1080p video can be stored on a 32GB thumb drive. That means a single 1TB (terabyte) drive can hold 176 hours of high-definition video (7.3 days of video). It would be expensive today to buy 52 × 1TB drives to store a year of your life. It seems crazy… right? Not when you are a historian. In 1992, the average hard drive was around 1GB – 1,000 times less than today.
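The arithmetic above is easy to check. Using the same assumption of 5.5 hours of 1080p per 32GB thumb drive:

```python
# Reproducing the LifeLogging storage arithmetic.
# Assumption: H.264 1080p video fits ~5.5 hours in 32 GB.

GB_PER_HOUR = 32 / 5.5          # ≈ 5.8 GB per hour of 1080p video
TB_IN_GB = 1024

hours_per_tb = TB_IN_GB / GB_PER_HOUR
days_per_tb = hours_per_tb / 24
drives_per_year = 365 / days_per_tb

print(f"{hours_per_tb:.0f} hours per 1 TB drive")
print(f"{days_per_tb:.1f} days per drive")
print(f"{drives_per_year:.0f} drives per year")
```

That works out to 176 hours per drive, about 7.3 days, and roughly 50 drives for a full year of continuous video.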

Some ideas to reduce the storage size of LifeLogging:

– Go vector. If you have an avatar created of you, a vectorized version of you could be stored. This type of compression does not exist yet, but it will. LifeLogging in bitmap video is like a tape deck; vectorizing video with the lifelogee as the center of the story would save 1,000× the storage. It is like the hard drive compared to tape storage. In addition, data stored this way could be accessed very quickly. Bottom line: with the right *software*, real LifeLogging could be done today. I should save this for another in-depth blog; I’ve spent many nights thinking about how it all could be done. I’ve got to stop watching sci-fi before bed (The Lawnmower Man, anyone?).

4: Assume that we are in the 2020s. Based on Moore’s Law, and several others, a LifeLogging device will be able to be worn around your neck and record your life in HD. It will probably be the price of a premium iPad. At that level, LifeLogging is ubiquitous.

5: What did I eat today? What about over the past week, month, or year? Just because that information is recorded as video (me munching an apple) does not mean that it can be analyzed and recognized as Donato-eats-apple. Where did you buy that apple? Can the purchase be cross-referenced with the date you bought it at the grocery store?

New industries

Software that analyzes and makes inferences from LifeStreaming will be a multi-billion-dollar industry (Donato ate an apple, Donato started the car, Donato got a phone call, Donato was watching the movie Contact). I would expect each major type of world interaction to be handled by a different app or algorithm.

Software that compiles inferences, builds statistics, and performs what-ifs on mass LifeStream data will be a multi-billion-dollar industry. (23% of people who ate apples 4x per month, where the apples came from Chile and were most likely treated with chemical X, developed cancer by age 55.) These are the types of discoveries we will be able to make that currently happen only by virtue of a happy accident.  (I made up that example… but do eat organic apples.)

Example: compiling a list of the junk (postal) mail letters that I throw out without opening. That is good data. Which ones did I open?

Software that manages the rights, payments, connectivity, and privacy between life streams will be a multi-billion-dollar industry. What if that apple from Chile was treated with some really nasty pesticide – a carcinogen? Could the supplier of that apple to the store be tracked? Do you want to know this? What if your wife bought it… and it is not part of your personal data stream? Do you and your wife have a LifeStream sharing agreement?

One person eating one apple does not a trend make. Multiply that by 50 million people over 5 years. This is not science fiction; this is simply faster computers, more memory, and analysis software. It’s a lot of apples. Would I share my eating habits anonymously and cross-reference them with my health? Maybe.

I expect that companies will pop up, each with a different set of analysis technology for different niches. It will probably evolve into an AppStore model. One company looks at how you interact with media, what you watch, listen to, theaters attended. Another knows what you eat. You can choose which feeds to share with the greater LifeStream and take part in a greater community.

By the way, none of this LifeStreaming will be on Facebook, or Google+. No one would trust them. In addition, it would be prohibitively expensive to centrally transmit, store and analyze it. Hmmm, maybe Facebook could be the trend builder? It is well positioned for it. Can you imagine it?

Donato ate an apple
Donato threw core in garbage
Donato did not recycle V8 can
Donato is driving 15 miles over the speed limit

This is the first time in a few years that I have thought of a way for Facebook to survive long term. In this Facebook, you would never log in to look at what people are doing; you would log in to see the latest trend and how it affected you.

I just hope it does not make it to Twitter and get retweeted by the “RIP Whitney Houston” drones. Once analysis agents can understand (and broadcast) our individual actions, Twitter has no reason to exist.

Big Data equals big money.

If it is possible, and someone can profit, it will be collected.

 

The decline of Apps and the rise of Agents and Clewds

Last week, while presenting a live webinar, “The Near and Far Future of Recruiting,” I had an epiphany.  I was talking about the eventual decline (or morphing) of Facebook.  The theory is this: mobile computing power in 10 years will be server-capable.  Add in violations of trust and a general mistrust of social networks.  The result is peer-to-peer social networking.  No Facebook needed; everything sits on your mobile device.  More private, more secure, total user control and no ads.  Facebook may lead the way, but it will be hard for them, as they would cannibalize their own ad-driven revenue model.

That was last year’s epiphany.

What led to the new epiphany was my pontificating on CRM systems.  This was a recruiter-centric talk about the future of recruiting.  Many recruiter CRMs have connections to LinkedIn profiles.  Every one of these that I have seen has been implemented incorrectly, through no fault of the vendors.  In an optimal situation, the data inside the profile should be mashed up with current CRM data.  Instead, LinkedIn requires usage of their API, which brings back a canned LinkedIn profile.  This is what I call “social linkage”.

The optimal situation would be a pair of “social agents”.  While a company may have 1,000 prospects in their CRM, they may only contact 50 in a given day.  One social agent would automatically refresh the entire CRM on a longer cycle, such as once per quarter.  Another, just-in-time social agent would update the CRM just before the outreach process.  Why is this important?  LinkedIn is not a definitive data source; nothing is.  What happens when you combine Facebook, Google+, Jigsaw (now Data.com), Foursquare, Twitter and whatever social network Microsoft comes up with?  Are you going to clutter your Salesforce or Microsoft Dynamics interface with 6-8 little snippets, many with redundant information?  This gets ugly fast.  The optimal implementation is to have a social agent retrieve the LinkedIn, Data.com, Google+, Facebook and Twitter information, then mash, score and apply analytics to present the information in a way that optimally fits your selling model.
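The “mash” step of such an agent might look something like the sketch below.  The source names, fields, dates and the keep-the-freshest-value rule are all invented for illustration; none of this is a real LinkedIn or Data.com API:

```python
# Hypothetical sketch of a social agent's mash step: merge per-source records
# for one prospect into a single profile instead of 6-8 redundant snippets.
# Sources, fields and the freshness rule are invented for illustration.
from datetime import date

def mash_profiles(records):
    """Merge (source, last_updated, fields) records, keeping the most
    recently updated value for each field."""
    merged = {}
    for _source, _updated, fields in sorted(records, key=lambda r: r[1]):
        for key, value in fields.items():
            merged[key] = value   # later (fresher) sources overwrite earlier
    return merged

records = [
    ("linkedin", date(2012, 5, 1), {"title": "VP Sales", "company": "Acme"}),
    ("data.com", date(2012, 3, 12), {"phone": "555-0100", "title": "Dir Sales"}),
    ("twitter",  date(2012, 6, 2), {"handle": "@prospect"}),
]
# The stale "Dir Sales" title loses to the fresher LinkedIn value.
print(mash_profiles(records))
```

Scoring and analytics would sit on top of this merged record, so the CRM shows one coherent profile tuned to the selling model rather than a pile of per-network snippets.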
