Facebook and the Rise of the Anti-Facebooks (#Ello and @joindiaspora)?


“Metcalfe’s Law points to a critical mass of connectivity after which the benefits of a network grow larger than its costs. The number of users at which this critical mass is achieved can be calculated by solving C*N = A*N², where C is the cost per connection and A is the value per connection. The N at which critical mass is achieved is N = C/A.”

Bob Metcalfe, founder of 3Com, on network effects in social network growth.
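To make the quote’s arithmetic concrete, here’s a minimal sketch of the critical-mass calculation–the values of C and A are hypothetical, chosen purely for illustration:

```python
# Metcalfe's Law: costs grow linearly (C*N) while value grows quadratically (A*N^2).
# Critical mass is the N where value overtakes cost, i.e., N = C/A.
# C and A below are hypothetical values, for illustration only.

def network_cost(n: int, c: float) -> float:
    """Total cost of a network with n users at cost-per-connection c."""
    return c * n

def network_value(n: int, a: float) -> float:
    """Total value of a network with n users at value-per-connection a."""
    return a * n**2

C = 10.0    # hypothetical cost per connection
A = 0.002   # hypothetical value per connection

print(f"Critical mass: N = C/A = {C / A:,.0f} users")

for n in (1_000, 5_000, 10_000):
    surplus = network_value(n, A) - network_cost(n, C)
    print(f"N = {n:>6,}: value - cost = {surplus:>10,.0f}")
```

Below the critical mass, the linear cost term dominates and the network runs at a deficit of value; above it, the quadratic term takes over. That crossover is precisely the hurdle every would-be anti-Facebook has to clear.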

The recent, rapid rise of Ello, the ad-free, pseudonym-enabling social network that launched in 2013 and is still in beta, is only the latest attempt to displace the social networking behemoth that is Facebook. A large part of Ello’s growth is due to Facebook’s recent crackdowns on pseudonyms on its network.

Despite small yet exponential growth in a short amount of time, Ello isn’t the only social network boasting features–namely, ad-free and privacy-enhanced ones–that take advantage of Facebook’s innovator’s dilemma. Diaspora*, a decentralized, peer-to-peer, ad-free network that’s independently hosted on servers by “pod” admins, launched in 2010 and has about 50k active users. (n.b.–I joined Diaspora* a month after its 2010 launch and have 6 posts, the last of which was 3 years ago.) At the time, Diaspora* tried to seize on a Google crackdown on pseudonyms on its network. More recently, Diaspora* has resurfaced in the news after its adoption by ISIS, which exploited its decentralized architecture. Not to be outdone by the latest Ello buzz in response to Facebook, Diaspora*’s Twitter feed seems to have set out on a PR push, retweeting people buzzing about the positive attributes of its network.

There’s a problem here for the anti-Facebooks: Metcalfe’s Law. In essence, Dr. Metcalfe’s quote above implies that a social network can only be successful once enough people are on it for its ubiquity to outweigh its switching costs. In other words, Facebook managed to amass millions of users because it connected colleges, then alumni, then families, and so forth. At a certain point, the number of connections, as well as the amount of shared information, made the cost of switching to other networks extremely high. Other networks tended to emulate the style of Facebook (profile picture, status/microblog/newsfeed, friends list, etc.), to the point that when Google launched its own social network, Google+, many pundits expected a Facebook killer. (It wasn’t, but IMO, it served an entirely different purpose, beyond a generic user profile page: gluing search to social via a user account and threading the data through there.) To date, more than 1.5 billion users are on Google+, but ask your friend–he’s not using it. (LinkedIn is similar, but its tone and features are more career-based, allowing it to thrive in the professional domain rather than the personal one.)

When Twitter launched in 2006, its medium shaped its message through the 140-character limit inherited from phone-based SMS bursts. Twitter wasn’t about statuses, pictures, profile pages, etc… it was about how much one could say in a short clip of information. Brevity was/is the soul of Twitter’s wit, and its eventual adoption was driven by the control of messaging rather than profile information–as was its advertising. New differentiation meant new adoption, albeit at a different pace from Facebook. As journalists started to livetweet news as it happened, Twitter began to take on a life of its own. Its conversational nature, a product of asymmetric network effects, developed in a different way than Facebook’s.

When Instagram launched in 2010, its cropped and filtered pictures, longer descriptions, and extensive use of hashtags created a new social network built on photo sharing and discovery. This was different from posting a photo on Facebook or Twitter, as it overlaid a new dimension of social networking that neither of those networks was providing along with the medium. When Snapchat… When Tinder… You see where I’m going with this? A social network has to provide value to its users, and a new social network has to provide enough value that “paying” the switching costs of adopting it outweighs the sunk costs invested in the current, critical-mass network.

So, what does this have to do with Ello and Diaspora*? Well, while I admire their missions, their differentiating features aren’t currently necessary, let alone sufficient, for users to “pay” the switching costs. Despite increased consumer demand for privacy, the masses don’t appear to view privacy’s value proposition as worth the tradeoff. Without enough users finding reason to make the switch, the network never reaches critical mass. As Morty Seinfeld used to say, “You don’t have the votes.”

I’ve been an early adopter of several new media/networks and watched some fail and others succeed–in both the short and long term; as a result, I have to anecdotally agree with Metcalfe’s view. Users are complacent and mostly hesitant to take on the switching costs of a new network. Once an incumbent network reaches critical mass, broad adoption of a new network that jumps the “chasm” on Rogers’ Diffusion of Innovation curve requires a disruptive MEDIUM, not just FEATURE differentiation. This is why the networking aspect of Google+ has been dismal (though successful for other reasons) and why Twitter has frequently acted in tandem with Facebook rather than against it. It’s why the ephemerality of Snapchat is supplanting Instagram and MMS. It’s why Vine adoption is a struggle compared with Instagram Video.

I haven’t posted on Diaspora* in 3 years. I understand it’s decentralized (P2P), but without peers on it, there’s no need to use oEmbed, formatting, or any other feature that makes it anything but “Yet Another Profile/Status Page”. I’ve posted on Ello, but it reminds me of Facebook with GIF functionality (and isn’t that essentially Tumblr?). We marketers are always looking for The Next Big Thing–the newest social medium with the most users that will yield the most ROMI. Kudos to Ello and Diaspora* for taking advantage of Facebook’s clunky haughtiness, but The Next Big Thing will be disruptive–it has to be. Ello and Diaspora* are nice for the marginal niches taking advantage of the incremental feature creep (privacy) of a new network. But even as they continue to grow, these networks are just not disruptive enough to cross anything more than the small chasm, let alone the big one. Ad-free, privacy-controlled networks just can’t get the ‘votes’ of mass adoption.

Still have a lot to learn

One of the people I follow on Twitter is Peter McGraw, a consumer psychologist at the University of Colorado-Boulder. Aside from some witty tweets and hashtag games on humor (one of his central streams of research), McGraw has coined the “Big Day” theory. It goes like this:

Every day, McGraw gets up, says that today’s a big day, and sometimes, he tweets it.

His theory boils down to three tenets:

  1. Life can be short.
  2. Life can change quickly.
  3. Life should have some urgency.

This pause for thought reminds me of some of the work I’ve read on mindfulness en route to my colleagues and me starting a consumer research stream on the topic. In a way, it creates a space between the individual and the day. It’s an early inflection point of a priori self-awareness and even present-moment awareness. It’s a simple meditation that doesn’t necessarily cause anxiety toward the day’s events or future events.

As I’ve been going through this challenging semester and trying (keyword: trying) to grow along the way, I’ve realized that there are a lot of things I still need to learn. I have emotional learning to do, mental/cognitive learning to do, social/political learning to do–and this learning isn’t only on a personal level, it’s on a professional level as well. I need to learn to be a better husband, a better father, a better son, a better friend, a better researcher, a better teacher… all of the roles in my life necessitate that I learn how to improve my skills, talent, and maturity, and move my life in the directions I need/want to head. This is something I partially allude to in the annual Yom Kippur reflection email I send to close family, friends, and colleagues.

A couple of days ago, I saw McGraw tweet the following:

It hit me: McGraw’s possibly innocuous tweet summed up exactly what I needed to acknowledge at the end of my day, every day. I still have a lot to learn. The theory goes like this:

  1. I have to learn from my mistakes, and I have to learn from my successes.
  2. I have to learn from the positives and the negatives in my life.
  3. I have to acknowledge that I’ve learned something from the day/past and that I still have yet to learn tomorrow/in the future.

In a way, “still have a lot to learn” is the flipside to “big day”–it’s a simple space between the individual and the day. It’s an inflection point at which self-awareness and present-moment converge, post hoc. And it’s a simple meditation without necessarily causing anxiety from the day’s events or past events.

So, starting this evening, I’m going to start saying that I “still have a lot to learn”–and perhaps tweeting it. I’m curious how this will actually affect me and whether I can leverage acknowledging both how much I’ve learned and how much I still have to learn. Much like the experiments on positive psychology and gratitude journaling, we’ll see if I feel growth from the experience. I’m sure I’ll still have a lot to learn.

The $3bn Snapchat ghost

Last week, Snapchat turned down a $3bn offer to be bought out by Facebook. Snapchat is all the rage now, but, like its media, is the rage ephemeral or valuable? It depends on how you look at it…

Snapchat’s founders were dumb for turning down Facebook’s offer. Here’s why:

1) Snapchat is not a platform. It can’t be. Doing so would contradict its own value proposition.

2) Lack of a platform means greater difficulty in creating native advertising–let alone native advertising that’s worth engaging with.

3) The content in Snapchat is not novel, nor unique to that app. This makes long-term product differentiation a bust.

4) The value proposition is in “temporary social media”. That is antithetical to the marketers’ notion of “eyeballs/engagement”.

5) Because the “temporary” aspect of the app is a marginal technological addition to the core of any other messaging service (read: SMS/MMS, FB Messenger, Google Hangouts), and because there are other competitors (WhatsApp, WeChat), there is limited differentiating growth potential.

6) Get someone else to sort out the revenue side.

Facebook was dumb for offering Snapchat $3bn. Here’s why:

1) Facebook is looking at new ways to make its site “stickier”. “Temporary social media” is antithetical to that.

2) The fact that teens are flocking to Snapchat doesn’t make its acquisition a panacea for avoiding the fate of MySpace.

3) Snapchat may have 400 million daily pictures, but it has no user data. This is a waste for a company whose core competency is building profiles of user data to use and/or sell.

4) Messaging “platforms” aren’t platforms at all. Ever since the cost of SMS dropped to nearly $0.00 (those costs borne by telecom carriers, not by internet/data-driven apps), its popularity has been in straight communication. Your own Facebook Messenger should demonstrate that people aren’t looking for bells and whistles when trying to communicate.

5) Until you can evidence the “eyeballs/engagement” anywhere in the model, you’re going to spend a lot of time and headache justifying to shareholders an obscene valuation.

All of this isn’t to say Snapchat isn’t valuable, or that it can’t become valuable in the future (e.g., if it develops an API à la Facebook, Twitter, or even Pinterest, contradicting point 1 above and becoming a platform), but that $3bn is an outrageous sum of money for something demonstrating marginal differentiating value.

NokiaSoft and Android KitKat’s Exercise in Incongruous Spillover

Just before I fell asleep last night, the headlines started coming across my Twitter feed announcing Microsoft’s acquisition of Nokia. Nokia, once one of the global leaders in mobile phone manufacturing, had been performing poorly almost since it made the Symbian operating system open source back in 2009 in a bid to displace Google’s open source Android OS. Clearly, the numbers weren’t what Nokia hoped for (link figures are 2.5 years stale), and in 2011, Nokia abandoned Symbian in favor of a collaborative effort with Microsoft to create a Windows Phone. At the time, Nokia CEO Stephen Elop (former head of Microsoft’s Business Division) infamously used a “burning platform” metaphor to describe the shift to Windows Phone as a pivot for Nokia’s fortunes in global smartphone share vs. Apple and Google.

For outgoing Microsoft CEO Steve Ballmer, the acquisition of Nokia seems anything but a lame-duck move–instead, it reinforces the role of patents in the current tech era (see: Google’s 2011 acquisition of Motorola) and intimately ties Nokia’s fate to Microsoft’s future. If nothing else, the deal also underscores the importance of digital ecosystems. After all, Google’s Android is the reigning OS, Apple has brand loyalists flocking to iPhones, and BlackBerry seems mostly confined to Canadians and the odd corporate policy (the latter waning, now that a lot of companies are looking to BYOD). Tying the Windows OS that runs on laptops, desktops, and the Microsoft Surface tablet to a dedicated handset division ensures a complete, robust ecosystem akin to those of Google and Apple, essentially putting a nail in BlackBerry’s coffin.

However, also moving ahead today was Google, announcing the release of Android 4.4: KitKat. In an odd twist on its generic dessert naming, Google not only tossed out the expected Key Lime Pie, but also moved into the co-branding space with Nestle.

To top it off, Google co-branded with Hershey, not only in a trademark aspect but in a full-on promotional aspect as well. Worldwide, Kit Kat is offering a chance to win a Google Nexus 7 and Google Play credits.

Now, Google claims Kit Kat is its employees’ “go-to snack” and Kit Kat claims that Google is vibrant and young. While Android may be pushing the other smartphone OS developers to do more in the way of innovation (read: Apple and Microsoft), it’s never really been a “vibrant” brand and, in fact, may not have as much mainstream brand cachet as the OEMs do. So let’s discuss the implications of the Android KitKat brand alliance from a deeper perspective.

Simonin and Ruth (1998, p. 40; see also Baumgarth 2007) researched brand spillover effects, finding evidence for the following points germane to this discussion:

  • When two highly familiar brands ally, they experience equal spillover effects
  • Both product fit and brand fit significantly affect attitudes toward the alliance
  • The impact of product and brand fit on the core brands is mediated fully by the alliance
  • When two highly familiar brands ally, both contribute equally to the alliance
  • Product fit and brand fit moderate neither the contribution of the brands to the alliance nor the spillover effects of the alliance on the core brands

Indeed, Walchli (2007) indicates brands in an alliance must be “moderately congruent on some dimension” and that in instances of lower congruity, reference to the “special capabilities and contributions of the brands” should be made. And James, Lyman, and Foreman (2006) suggest that higher levels of congruity lead to higher purchase likelihood of a co-branded product. Finally, Gammoh, Voss, and Chakraborty (2006) find an interaction between cognitive evaluation and message strength, moderated by a reputable brand ally. In instances of low cognition and high message strength, the brand ally matters simply as endorser; in instances of high cognition and low message strength, the brand ally becomes an information cue instead.

So what is the Android KitKat experience? Do people understand the Android brand? How does Google come into play? I suspect that, although there is significant fit in the fact that Google has traditionally named Android releases after desserts, the larger effect is on the Google Nexus 7 device that Kit Kat will be promoting on its candy bar packaging. By the time the contest ends, consumers should expect (but likely won’t care) that the next generation of the Nexus 7 is running Android KitKat. Most confusing is the limited congruity between Android and Kit Kat, primarily in the candy bar promotion. The android has little fit with Kit Kat to improve purchase likelihood; it carries low cognitive evaluation and low message strength to play endorser; and there’s limited contribution of the android to Kit Kat, as opposed to the other way around.

A quick search fails to find market research on consumer awareness of Android OS names; however, informal research (aka asking around) seems to indicate that early adopters and tech enthusiasts are more aware of version naming (and conventions), while the majority of consumers couldn’t say whether they’re running Gingerbread or Jelly Bean. If that’s true, Android KitKat may help Kit Kat sell more candy bars, but it’s highly doubtful that Kit Kat will help sell more Androids.

mTurk: Method, Not Panacea

Lately, a bevy of articles has come out describing the pitfalls of Amazon mTurk research (see “Mechanical Turk and Experiments in the Social Sciences”, “Don’t Trust the Turk”, et al.). These articles share a prevailing POV that generalizes the problems with mTurk research, and a layman’s read suggests that researchers using mTurk view it as a panacea. I, however, am of the mind that this view will rapidly become stale and stodgy as research on mTurk itself (e.g., ExperimentalTurk) determines that the benefits of research on mTurk outweigh its shortcomings.

In his post, for example, Gelman talks about how mTurk allows for large sample sizes that virtually always lead to statistical significance. “…[P]apers are essentially exercises in speculation, “p=0.05” notwithstanding.” Well yes, this is true. A sample of 2,000 will likely yield a significant result. However, an understanding of basic statistics and sampling distributions dictates that large sample sizes shouldn’t be used simply because they’re available, as the significance they produce demonstrates limited generalizability of a phenomenon.
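To see why, here’s a quick simulation–a sketch with made-up data, where the true effect size (Cohen’s d = 0.1) is deliberately trivial:

```python
# With n = 2,000 per group, even a trivially small effect (Cohen's d = 0.1)
# routinely clears p < 0.05. Simulated data, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 2_000

control = rng.normal(loc=0.0, scale=1.0, size=n)
treatment = rng.normal(loc=0.1, scale=1.0, size=n)  # true effect: d = 0.1

t, p = stats.ttest_ind(treatment, control)
d = (treatment.mean() - control.mean()) / np.sqrt(
    (treatment.var(ddof=1) + control.var(ddof=1)) / 2
)
print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")
# At this n, power for d = 0.1 is roughly 88%, so "significance" is nearly
# guaranteed even though the effect itself is negligible.
```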

This is not a problem endemic to mTurk–it’s a problem endemic to poorly designed research. (On a tangent, I’d like to know how a published paper in a decent journal printed the line: “As reflected by the size of F value and its p value of the result, the difference among the three countries was not as significant as that among the three segments.” This has nothing to do with mTurk–it just highlights that poor research is poor research.) Technology is not to blame for poor research; the skills of the researchers and reviewers who publish the research play a role in this.

The fact is, those who are keen on mTurk and its potential recognize that mTurk is both a vehicle for the research (a “wrapper”, if you will) and part of the design itself. Those who are keen on mTurk recognize that it is not a panacea. We recognize that it has its limitations (see “How naive are MTurk workers”), but accept those limitations as part of the research design itself and either work within the constraints of the method or acknowledge such limitations. Take, for example, the switch from paper surveys to online surveys such as Qualtrics (e.g., Greenlaw and Brown-Welty 2009 on mode of survey presentation). Factors like page breaks, questions per page, “Front/Back” buttons (and their presence or lack thereof), and pagination are all relevant to design, yet frequently overlooked. As researchers, we still need to recognize their importance and limitations; we need to recognize where we make tradeoffs and justify why those tradeoffs are made.

That’s the thing with social science research: it has limitations. When was the last time we heard someone say:

“Let’s toss experiments out because we can’t control for some incidental confound!”
“Let’s toss surveys out because we can’t control for self-reporting!”
“Let’s toss grounded theory out because the author subjectively influences the outcomes!”
“Let’s toss student samples because we can’t get external validity!”
“Let’s toss out national samples because we can’t get internal validity!”

We don’t. We haven’t. (Well, except maybe the last two–see: McGrath and Brinberg 1983.) We acknowledge why we’ve used a particular method, we acknowledge what outcomes we find with that method, and we acknowledge that the method has potential flaws that may be explored in the future. MTurk is no different in this regard. And as the technology improves and becomes more sophisticated, taking advantage of APIs (e.g., TurkCheck), behavioral research will adapt accordingly.

mTurk is both a vehicle for research and part of the design itself. Researchers who don’t “get” that it is both of these aspects will fail with it, all around. The rest of us who are trying to specialize in it are finding not a panacea, but a totally different outcome that is workable within the methodological paradigm (see: Berinsky, Huber, and Lenz 2012). This is a subject worth mooting, but in the end, it will be moot.

[Disclaimer: I used mTurk in my dissertation and in a couple of other studies and, based on my own findings, am planning to follow advances in mTurk technique to the best of my technical skills. There are advantages to being able to get reasonable sample sizes (i.e., just-specified structural equation modeling), and my data over multiple studies implies fairly representative demographics across samples. In one study, for instance, participants were randomly assigned to one of four conditions–no significant differences were found in demographics across conditions, self-report bias notwithstanding.]
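For the curious, the balance check described in that disclaimer amounts to a simple chi-square test of demographic counts across conditions. A minimal sketch, with hypothetical counts rather than my actual data:

```python
# Test whether a demographic category (e.g., a two-level gender split) differs
# across four randomly assigned conditions. Counts below are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: conditions 1-4; columns: demographic categories.
counts = np.array([
    [48, 52],
    [51, 49],
    [47, 53],
    [53, 47],
])

chi2, p, dof, _expected = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
# A non-significant p is consistent with randomization having balanced
# the demographic across conditions.
```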

Twitter and the Walled Garden

In August 2012, Twitter announced the launch of API 1.1, an API suite that would be far more marketer-friendly. For good reason (namely, the need to monetize the platform), Twitter needed to adapt its API for inclusion of paid tweets and to add more robust tracking, targeting, and implementation of marketing campaigns. This included the following measures:

    1. Required OAuth authentication on all API endpoints
    2. API rate limiting
    3. More sophisticated “developer rules of the road”

Additionally, Twitter imposed a 100,000-token limit on existing, third-party Twitter clients. Unfortunately, as full implementation of Twitter’s API 1.1 concludes today, it is worth understanding how the new API comes around on the consumer experience, turning Twitter into a “walled garden” akin to Facebook.

To understand the draconian effects of the new API on consumers is to understand the history of Twitter. Twitter launched in 2006 and broke out at the SXSW festival the following spring. As an early adopter of many internet technologies, I opened my first personal account almost exactly a year later (putting me in the first 0.22% of Twitter users). So I’ve seen Twitter through server crashes, API rate limiting, fail whales, and the like. The early stages of Twitter predated the deep penetration of smartphones and app proliferation; Twitter largely existed through SMS and the web as a way for a user to send mass messages to his network.

Additionally, unlike Facebook “friends”, Twitter didn’t require a reciprocated connection for information to be disseminated. The early versions of the Twitter API were kept relatively open, allowing developers to build third-party desktop clients, web-based clients, and eventually, smartphone clients. This gave users wide latitude in how to navigate the Twitter experience–that is to say, short 140-character bursts to the user’s network. Different clients have had different features that users have adopted as they prefer: ease of use, accessibility, aesthetics, use of columns, search features, display of external content, refresh speed, and so on.

The infrastructure was rather unsophisticated and crashed frequently–particularly during peak usage periods. For this reason, Twitter instituted API rate limiting, which meant that any ping (request) of the Twitter servers outside of SMS/twitter.com/the official Twitter app would count toward this limit (June 2007 – 1,440 requests in 24hrs; May 2008 – 70/30/70 requests per hour; July 2008 – 100 unauthenticated requests per hour; January 2009 – 20,000 requests per hour; June 2010 – 350 authenticated and 175 unauthenticated requests per hour; March 2013 – 180 authenticated requests per hour). These requests may be Twitter searches, profile views, embedded content views, Tweetstream refreshes, and so forth. Multiply that by the simultaneous use of clients on multiple devices and the API limit can be reached rather quickly on third-party clients (e.g., I use HootSuite on my laptop, Twicca on my phone and tablet, and Falcon Pro on my tablet as well).
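To see the squeeze from a third-party client’s perspective, here’s a minimal sketch of an API 1.1 request–every call must be OAuth-signed, and remaining quota comes back in Twitter’s x-rate-limit-* response headers (the credentials below are placeholders):

```python
# Minimal API 1.1 request from a third-party client: OAuth 1.0a on every call,
# with rate-limit state reported in response headers. Credentials are placeholders.
import requests
from requests_oauthlib import OAuth1

auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
              "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")  # placeholder credentials

resp = requests.get(
    "https://api.twitter.com/1.1/statuses/home_timeline.json",
    auth=auth,
    params={"count": 20},
)

remaining = resp.headers.get("x-rate-limit-remaining")
reset_at = resp.headers.get("x-rate-limit-reset")  # epoch seconds
print(f"HTTP {resp.status_code}: {remaining} requests left; window resets at {reset_at}")
# Refresh a stream on a laptop, phone, and tablet at once and 'remaining'
# drains quickly -- which is exactly the squeeze third-party clients feel.
```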

It becomes evident that, aside from the 100,000-token limit on third-party Twitter clients, Twitter has also closed the wall around third-party API requests, putting them back to nearly 2008 levels (n.b.–in 2008, Twitter had 3 million registered users; in 2013, Twitter has 500+ million registered users). For Twitter “power users” who frequently view profiles, refresh their streams for breaking news, use outside apps to post to/pull from Twitter, etc., this is a devastating change that conflicts with Twitter’s growth trajectory and technical scalability… unless users stick to twitter.com or the official Twitter app, which don’t count toward API requests. Rate limiting killed Coca-Cola’s use of Twitter for a multiscreen advertising campaign during the Super Bowl, as Coca-Cola doesn’t use twitter.com.

To put the nail in the coffin of consumer choice, Twitter yesterday announced the retirement of its TweetDeck desktop and app support (n.b.–TweetDeck started out as a third-party client until Twitter purchased it in May 2011, making it a viable alternative with more features and functionality than the official Twitter client). This move, along with the API rate limiting and the 100,000 tokens, completes the wall around the garden. Effectively, Twitter has pigeonholed any external development around its platform–development that led to the company’s explosive growth–and all but mandates that users use its own software. The sacrifice of TweetDeck is nothing if not symbolic of this shift.

I mentioned above how this is great for marketers; rather than risk third-party work-arounds for API 1.1, Twitter tightens the reins in advance of a long-speculated IPO. It demonstrates via Promoted Tweets and Promoted Trends that it can effectively monetize social. It forces any remaining developers to ensure a consistent experience, inclusive of the advertising that brings in significant revenue. And, as I teach my Internet Marketing students, it reaches a significant number of eyeballs–possibly more so than Facebook display advertising.

This could have all been strategized in a more flexible manner, rather than a muscle-flexing one. Rather than bring 500+ million users in 2013 back to the system of 2008, Twitter could have easily required new developers to integrate the advertising software (perhaps in exchange for a proportion of revenue). It certainly didn’t need to lower the API rate. And it didn’t need to limit developers to 100,000 tokens. Twitter grew its ecosystem as the result of an open architecture; it is killing that system in favor of Wall Street and at the expense of the user. Like Facebook’s IPO, Twitter API 1.1 may be the move that jumps the shark for its most devoted users.

Dissertation Defense Day

2pm, February 7, 2013 – “Why Do Consumers Consume Prosocially? The Equity Exchange Theory of Marketing”

A collection of GIFs I made on tumblr.

Before the defense I practiced hand-tying my own bow tie:

Just before:

Awaiting the verdict (pic courtesy of Bruce Weinberg):

Doctor.

Spencer M. Ross, Ph.D. (pic courtesy of dad)

Spencer M. Ross, Ph.D. (pic courtesy of Bruce Weinberg)

With my advisor, George R. Milne (courtesy of George Milne):

Post-defense celebration with the wife and daughter (courtesy of dad)

Violation Transportation and the GoDaddy “Perfect Match”

Last night’s Super Bowl featured an interesting ad, sponsored by GoDaddy.com. GoDaddy, a perennial envelope pusher in what some may deem misogynistic advertising, featured a :30 spot called “Perfect Match”. In the ad, the “sexy side” (represented by Israeli supermodel Bar Refaeli) and the “smart side” (represented by “Walter” (Jesse Heiman)) come together for a 10-second liplock, complete with hypercontextualized sound effects. An interstitial claims “when sexy meets smart, your small business scores” (double entendre implied). An extended version, featuring 30 seconds of kissing with tongue and rock and roll background music, was also made available via the GoDaddy web site.

So after watching the commercial, you may be thinking two things: 1) Ew, gross! 2) What does GoDaddy get out of that commercial if everyone is turned off? Is any buzz really “good” buzz?

USA Today rated the ad a 3.30 out of 10, placing it in the bottom five ads of Super Bowl evening, while the BrandBowl rated it 9 out of ?, placing it at #17 out of 43 ads, with increased sentiment of 64%. Around the internet and through all the MMQBing, GoDaddy was thought to have one of the worst Super Bowl ads (falling in line with Thought 1). But perhaps the ad was more genius than previously thought (falling in line with Thought 2). Forget the fact that we’re still talking about it the next day–here’s why I speculate this is so.

Peter McGraw, humor researcher at the University of Colorado Boulder, has done research on a phenomenon called “benign violations”. A benign violation involves some sort of normative deviance–a threat to the way we believe things “ought to be”–that amuses us as long as the threat is benign. The Benign Violation Hypothesis necessitates that a situation be “appraised as a violation” (a model-and-geek makeout session in a Super Bowl ad), “appraised as benign” (there is distance between us, the viewing audience, and the offending parties, who have no direct impact on us), and that “these two appraisals must occur simultaneously.”

Alright, so I would speculate there’s a benign violation here. We all went “ew”. But what about the real-time viewer polling that showed a sharp increase in positive male sentiment (10% to 63%) versus female sentiment (10% to 24%)?

A line of research on transportation effects in advertising (Escalas 2004; 2007; Green and Brock 2000; Phillips and McQuarrie 2010) points to a phenomenon where consumers use advertising to construct mental representations of themselves in the context of the advertising. In particular, Escalas’s work assumes that consumers are “transported” into the realm of the advertisement, so that they mentally construe themselves in the role of the narrative’s main character. Phillips and McQuarrie take this notion further, suggesting that we reflect ourselves in the advertisement like a metaphoric mirror.

This starts to make sense now. Yes, the GoDaddy ad features a benign violation, but it also features a transportation effect. This effect has particular appeal to men–the typical target of a GoDaddy advertisement. So GoDaddy splits the middle: we may all agree on the benign violation, but males (as evidenced by the polling) are willing to put that violation aside in favor of transporting themselves into a salacious lovefest with Bar Refaeli. For males, the momentary discomfort lapses and “what if I was Walter?” sets in, while females are left hanging uncomfortably, waiting for the commercial to end.

Indeed, it would be interesting to learn what traffic to GoDaddy’s website looked like following the ad’s premiere–especially among different demographics. At the very least, the short version of the YouTube clip is nearing 7 million views, while the extended version is nearing 190,000. Best Super Bowl ad? No. But it’s not the write-off that many have proffered.

Microsoft: Think Different

Today, Microsoft unveiled a new logo–its first redesign in 25 years. This coincides with the impending launch of the Windows 8 operating system this fall, but perhaps has greater symbolism: it revitalizes one of the world’s largest tech brands, one recently displaced by both Apple and Google.

At first glance, I wasn’t sure what to make of the redesign. That changed in the span of 30 seconds, when I caught the YouTube clip unveiling the logo. A quick re-contemplation later, I actually think the new logo is an intelligent move by Microsoft, which operates not only in the consumer domain, but heavily in the enterprise domain as well.

Recent research by Walsh, Winterich, and Mittal (2010, JPBM) suggests drastic redesign may have a negative impact on strongly committed, brand-loyal consumers. Follow-up research (Walsh, Winterich, and Mittal 2011, JCM) indicated logo redesign affects brand attitudes, with particular respect to self-construal. Additionally, Müller, Kocher, and Crettaz (2011, JBR) used both experiments and structural equation modeling to determine a positive relationship between logo redesign, brand modernity, and brand loyalty. Of four dimensions of logo redesign (attractiveness, complexity, appropriateness, and familiarity), only logo attractiveness and logo familiarity had a significant impact on logo attitude. Indeed, they write, “for the IT sector, when similarity between the old and the new logo is high, respondents seeing the new logo rate brand modernity higher than those seeing the old one.”

Here’s how I see the new Microsoft logo redesign being fresh, yet attractive; modern, yet familiar:

1) This is Microsoft’s first logo redesign in 25 years. Radical thinking can backfire when revising a stably accepted logo (e.g., the Gap logo backlash remains a contemporary case study in logo redesign failure: its difference in aesthetic clashed with various brand dimensions, and the logo was quickly reverted to the original). Keeping the new Microsoft logo consistent with the brand is critical.

Alternatively, the recent Starbucks logo redesign was a more thematically consistent change:

2) The original Microsoft logo doesn’t have the Windows symbol in it–only the logotype. The new logo incorporates the most ubiquitous Microsoft icon: the Window. And yet it does so in an understated way: the Windows symbol sits just to the left of the “Microsoft” logotype, enabling either to be used by itself. Rescaled, either the symbol or the logotype would still be recognizable.

3) The four window tiles tie together prototypical core Microsoft elements: Windows (blue), Office (red), and Xbox (green). The debate will soon rage over what “yellow” represents. The tie-in with the Microsoft elements becomes clearer when watching the 30-second YouTube spot.

4) The simplicity of the symbol’s Windows squares evokes the simplicity and consistency of the Windows 8/Mobile [not]Metro UI that’s being launched all throughout the Microsoft brand ecosystem.

5) The ‘Microsoft’ logotype has appropriate kerning that looks contemporary (also consistent with the Segoe-based fonts Microsoft is currently using across its ecosystem’s UI: Xbox, Bing, Windows, etc.).

And for comparison’s sake, Apple never really evolved its logo too far from its core, either. The rainbow-themed apple was used from 1976-1998, a monochrome-themed logo from 1998-2001, an Aqua-themed logo from 2001-2003, and a glass-themed logo since 2003. (In fact, one might add, Apple was formally known as Apple Computer, Inc. until 2007, when it finally dropped “Computer” in an 8-K SEC filing.) In short, one of the two tech brands more valuable than Microsoft has only marginally changed its own logo over the course of its history.

This is not to say that the new Microsoft logo is perfect. For example, while it may be a more contemporary version of what Windows has been doing for a while, and it may mimic Google Chrome, it doesn’t point Microsoft in any future-oriented direction but Windows. That implies that for the foreseeable future, Microsoft is hanging onto its core product, even as Apple and Google encroach on desktop (laptop) and mobile devices. At the very least, it’s not only modern for the consumer and corporate user, it’s modern for internal marketing as well. And who doesn’t like to feel rejuvenated?

Mindfulness and the device paradigm

Paul Miller (The Verge) is three months into a ‘radical living experiment’: going without the internet for an entire year. The irony of this experiment is that Miller writes for a publishing company that deals extensively with technology news. In the three months he’s been disconnected from the internet, he’s adapted to things like reading books, engaging in conversations at coffee shops, using a telephone… the things we used to do in the 1980s, before the internet hit the mainstream.

In his latest post on The Verge (Miller still uses an internet-disabled computer to write, prints his stories, and gives them to his editors), Miller shares his goings-on of the past three months. In particular, he writes that there is a difference between ‘disconnecting’ and ‘disconnected’–despite his ability to adapt to life without internet, the greater realization is that being in the present moment requires knowing not just how to do it, but also why to do it. As Miller puts it: “There’s still nobody on the computer waiting to love me, and I just have to deal with it.”

It’s almost as if this experiment has helped Miller re-engage with what Albert Borgmann (1984) called ‘focal things and practices’–things that require practice to help create intimate connections with objects. Over the past several years, we’ve used the internet (and social networks, in particular) to try to improve social relationships. And yet, the device paradigm has superseded our ability to foster relationships with anything but the computer. Instead, we’ve relied on an intimate connection with our computers to do the bidding of our social relationships. And when we realize that we have 1,400 friends who couldn’t care less whether we’ve ‘disappeared’ for a month or three, we do feel inconsequential.

As Miller learns, though, what the internet giveth–that is to say, an easy fix to boredom and the perception of alleviated social stress–the internet also taketh away: our ability to know exactly what those focal things and practices are. When faced with the present moment, we don’t know how to occupy it with just ourselves.

Some recent research by my colleagues and me (forthcoming in the Journal of Public Policy and Marketing) found that mindfulness practice helps fill this deficiency in our understanding of how to use the present moment. A common maladaptive substitute for stress relief–disordered eating–can be alleviated by implementing formal mindfulness practice in daily life. Additionally, we found mindfulness also reduces stress levels. In a society that typically looks for solutions to fill time, mindfulness ends up filling purpose. Yes, the computer could possibly be used as a focal object, but if we forget why we’re using that object, our reliance on it only intensifies. Mick and Fournier (1998, JCR) also find that an avoidance strategy for coping with the “technology paradox” is no better than a confrontative strategy at reducing the stress and conflict caused by the paradox.

Indeed, I suspect Miller is starting to find the balance in his life. I suspect he is gaining more of a sense of purpose as he grapples with his inconsequentialism. I suspect he will have a greater appreciation for the focal things and practices his computer lets him accomplish. And I suspect this will do more for his ability to know why it’s “time to get back” than for when/how it’s time to get back.