Sunday, August 21, 2011

Tool: MooWheel (a JavaScript connections visualization tool)

Intended Use: MooWheel is a JavaScript visualization tool used to display connections between objects such as people, places, and things. The library provides a distinctive way to visualize data using JavaScript and the canvas element.

Authors/Owners: This tool is authored by Joshua Gross and is licensed under an MIT-Style License.

Requirements: The tool depends on the CanvasText library (for text support on the canvas element) and ExCanvas (for canvas support in Internet Explorer).

Usage: Using MooWheel is straightforward. In practice it can be used to rank the connections among a group of people by weight, or to map one set of data onto another.

* First, include the necessary JavaScript files in the header of the page.

* Second, create an array of items describing the connections.

* Third, add a container element for the canvas tag to the body of the document.

* Finally, initialize the MooWheel object. A minimal sketch of these four steps follows.
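
Below is a hypothetical sketch of those steps, assuming the id/text/connections data shape shown in the library's example usage; the exact constructor signature and option names may differ in the version you download, so treat it as an illustration rather than documentation.

<!-- Step 1: include the required scripts in the page header -->
<script type="text/javascript" src="mootools.js"></script>
<script type="text/javascript" src="excanvas.js"></script>
<script type="text/javascript" src="canvastext.js"></script>
<script type="text/javascript" src="moowheel.js"></script>

<!-- Step 3: a container for the canvas element -->
<div id="wheelContainer"></div>

<script type="text/javascript">
window.addEvent('domready', function() {
    // Step 2: an array of items and their connections
    // (the ids and names here are made up for illustration)
    var data = [
        { id: 'alice', text: 'Alice', connections: ['bob', 'carol'] },
        { id: 'bob',   text: 'Bob',   connections: ['carol'] },
        { id: 'carol', text: 'Carol', connections: [] }
    ];

    // Step 4: initialize the MooWheel object on the container
    var wheel = new MooWheel(data, $('wheelContainer'));
});
</script>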



References:

MooWheel: A Javascript visualization Library

A Sample Visualization example using MooWheel

Viz: Number of endangered languages by country

This graph shows the number of endangered languages of the world by country and the degree of endangerment. The data is taken from the UNESCO Institute for Statistics.


via chartsbin.com
The graph was created using ChartsBin, which is very helpful for creating rich, interactive data visualizations from user-supplied data sets. The tool is powered by various open-source libraries and tools: the map data processing is done with tools written in Java and C, and the visualization layer is based on Adobe ActionScript and JavaScript libraries.

The size of each circle represents the number of endangered languages for that country. The data is shown in a single color, which increases the readability of the graph. The graph is interactive: hovering over a country shows the total number of languages and their breakdown by degree of endangerment.

The above map shows the same data visualized differently. Here the color of each pointer indicates the degree of endangerment. The UNESCO Atlas of the World's Languages in Danger shows the distribution and can be searched by degree, language name, and country.

The UNESCO Atlas of the World's Languages in Danger can be found here:
http://www.unesco.org/culture/languages-atlas/

The two representations serve different purposes. The first is clear and presents the data at a high level, whereas the second exposes the underlying detail and lets it be filtered through various options.

References:
  1. UNESCO Atlas of the World's Languages in Danger (http://www.unesco.org/culture/languages-atlas/)
  2. Charts Bin (http://chartsbin.com/)

Data: NOAA Freeze/Frost and Growing Season Data

NOAA Climate Data
Free Data Link
Actual Free Data from Above

The National Oceanic and Atmospheric Administration (NOAA) is a US government agency focused on oceanic and atmospheric data. They keep rather large data sets from several other agencies beneath them, including the National Weather Service, the National Ocean Service, the National Marine Fisheries Service, and more. The National Climatic Data Center is the storage facility for 99% of the NOAA data, which includes 1.2 petabytes of digitized data (1).

There is a great deal of useful data in the above links to plow through, but at least one good data set is here:
Data Explanation Document
Freeze/Frost Maps (derived visualization)

In short, the data behind the second link above is in an ASCII text format.
The statistics in the data set were computed from data collected by 4,346 stations between the years 1971 and 2000. There are 15 space-separated columns of data on each of 3,578,228 lines; however, the first column packs 3 different values together without any spacing between them: a state code, a station code, and a division code. The explanation document mentions one essential companion document used to get station metadata (e.g. name, location, elevation). It is linked from the Freeze/Frost Data page above in PDF form.

Column 2 represents the temperature threshold for which the freeze date in that line was calculated. The dataset uses 6 different threshold temperatures, all in degrees Fahrenheit: 36, 32, 28, 24, 20, and 16.

Column 3 is the freeze season character, where '1' indicates the last spring freeze, '2' indicates the first fall freeze, and '3' indicates a growing season length record.

Columns 4 through 12 are, for freeze season characters 1 and 2, the freeze dates for the temperature threshold given in column 2. The columns correspond to freeze probabilities of 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, and 90%, respectively. If the line is for freeze character 3, these columns instead contain the growing season length computed for the given temperature threshold.

Column 13 represents the number of years in which the specified freeze temperature threshold was reached or exceeded in the 30 years of data.

Column 14 contains the mean number of days of occurrences associated with the given freeze threshold. I'm not sure whether that means the exact specified temperature occurred. Either way, the values have an implied decimal point that must be accounted for when reading in the data.

Column 15 is the standard deviation of the mean found in column 14.
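
Under those assumptions, one line of the file could be parsed with something like the following Node.js sketch. The character widths used to split the packed first field, the implied decimal placement, and the file name are my guesses, so check them against the explanation document before trusting the output.

// Hypothetical parser for one line of the freeze/frost data file.
// Field widths for the packed first column and the implied decimal
// placement are assumptions, not taken from the explanation document.
var fs = require('fs');

function parseLine(line) {
    var fields = line.trim().split(/\s+/);
    var packed = fields[0]; // state + station + division codes, no separators

    return {
        stateCode:     packed.slice(0, 2),  // assumed 2-digit state code
        stationCode:   packed.slice(2, 6),  // assumed 4-digit station code
        divisionCode:  packed.slice(6),     // remainder taken as the division code
        threshold:     parseInt(fields[1], 10),       // 36, 32, 28, 24, 20 or 16 (deg F)
        seasonChar:    parseInt(fields[2], 10),       // 1 = spring, 2 = fall, 3 = season length
        probabilities: fields.slice(3, 12),           // values at 10% ... 90% probability
        yearsReached:  parseInt(fields[12], 10),      // years the threshold was reached
        meanDays:      parseInt(fields[13], 10) / 10, // implied decimal point (assumed one place)
        stdDev:        parseFloat(fields[14])
    };
}

// Usage: read the file and parse every non-empty line.
var lines = fs.readFileSync('freeze_frost.txt', 'utf8').split('\n');
var records = lines.filter(function (l) { return l.trim().length > 0; }).map(parseLine);
console.log(records[0]);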

I believe this data was used to produce the Freeze/Frost map visualizations linked above. The explanation document mentions derived maps but does not give a link to them, so I assume those are the ones. I might write a visualizations article about those maps.

Finally, the explanation document mentions the obvious, that these freeze/frost data are useful for the agricultural industry.

Viz: Thinking Machine 4 ( Chess Visualization )


Thinking Machine 4 is a visualization of a chess engine. The engine itself is simple and uses only basic algorithms from the 1950s (alpha-beta pruning and quiescence search). The visualization tries to portray how the computer thinks while running these algorithms and shows the moves it is evaluating.
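
For reference, alpha-beta pruning is a refinement of minimax search that skips branches which cannot affect the final decision. The sketch below is a generic JavaScript illustration of the idea, not the applet's actual code; evaluate(), getMoves() and applyMove() are hypothetical game-specific helpers.

// Generic alpha-beta search sketch (illustration only, not Thinking Machine's code).
// evaluate(), getMoves() and applyMove() are hypothetical game-specific helpers.
function alphaBeta(position, depth, alpha, beta, maximizing) {
    if (depth === 0) {
        return evaluate(position); // static evaluation at the leaves
    }
    var moves = getMoves(position, maximizing);
    var best = maximizing ? -Infinity : Infinity;
    for (var i = 0; i < moves.length; i++) {
        var score = alphaBeta(applyMove(position, moves[i]), depth - 1, alpha, beta, !maximizing);
        if (maximizing) {
            best = Math.max(best, score);
            alpha = Math.max(alpha, best);
        } else {
            best = Math.min(best, score);
            beta = Math.min(beta, best);
        }
        if (beta <= alpha) break; // prune: this line cannot affect the final choice
    }
    return best;
}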



In this visualization, after every human move Thinking Machine shows how the computer tries to calculate potential future moves. The candidate moves are drawn as arcs of different colors, and the brighter arcs are the ones the program considers better.

It also shows a wave of influence for each piece on the board, which indicates that piece's effective attack range (and could be useful for a new player).

It is a simple and interesting visualization and shows very interesting moves as you play. It can be genuinely useful for judging and developing chess gameplay. However, with so many arcs it is easy to lose track, and the display looks a bit cluttered.


Technologies Behind:
  1. HTML
  2. Java applet
  3. Processing (for graphics)
It is a bit different from other visualizations in terms of technology, as it uses an applet and a lot of server-side computation.

References:

Viz: NHL playoffs - Stanley cup



The Stanley Cup visualization captures all the important information about the NHL playoffs dating back to 1927, the first year the Cup was contested solely by NHL teams. It makes a very good attempt at a graphic representation using HTML5. With a minimal number of mouse clicks, the user is shown the teams participating in the playoffs, the number of times each team has appeared, whether it won or lost, and each team's timeline. Minimizing the number of clicks plays an important role in good UI design, as it demands less effort from the end user.

The data set required for generating this visualization can be obtained by running data-parsing scripts over web-based data. The data set for the Stanley Cup can be found here.



According to the creator of this visualization, the steps involved in building it are as follows (a rough sketch appears after the list):
a) Start with simple HTML markup.
b) Build a timeline with JavaScript.
c) Add CSS.
d) Add interactivity.
e) Draw the canvas.
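
A minimal, hypothetical version of those steps (omitting the CSS) might look like the following. It is not the author's code; the element ids, the sample finals, and the drawing logic are made up for illustration.

<!-- Step a: simple HTML markup with a year list and a canvas for the timeline -->
<div id="cup">
  <ul id="years"><li>2009</li><li>2010</li><li>2011</li></ul>
  <canvas id="timeline" width="600" height="120"></canvas>
</div>

<script type="text/javascript">
// Step b: the timeline data (a few real finals as sample rows)
var finals = [
  { year: 2009, winner: 'Penguins',   loser: 'Red Wings' },
  { year: 2010, winner: 'Blackhawks', loser: 'Flyers' },
  { year: 2011, winner: 'Bruins',     loser: 'Canucks' }
];

var canvas = document.getElementById('timeline');
var ctx = canvas.getContext('2d');

// Step e: draw one tick per final, darker when its year is highlighted
function draw(highlightYear) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  finals.forEach(function (f, i) {
    var x = 50 + i * 180;
    ctx.fillStyle = (f.year === highlightYear) ? '#003366' : '#99aabb';
    ctx.fillRect(x, 20, 8, 60);
    ctx.fillStyle = '#333333';
    ctx.fillText(f.year + ': ' + f.winner + ' over ' + f.loser, x - 30, 100);
  });
}

// Step d: interactivity via mouse-over on the year list
document.getElementById('years').addEventListener('mouseover', function (e) {
  if (e.target.tagName === 'LI') {
    draw(parseInt(e.target.textContent, 10));
  }
});

draw(null);
</script>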

The detailed HOW-TO of the visualization.

Here is the link of the visualization - http://vis.robbymacdonell.com/stanley-cup/

Pros of the visualization -
a) Clear differentiation between each team's participation in the cup and its result, represented in blue (won) or red (lost).
b) A text description in the background, shown on mouse-over, describing the visual representation.
c) Mousing over a listed year highlights at most two teams, so the visual is not crowded.
d) Different colours are used for the timeline and for each event mapping to one of the years listed at the bottom.

Possible improvements -

a) The data set which involves the teams participating in the play-offs could have included the scores too.
b) The text representation in the background upon a mouse-over action can be presented with better colours.
c) Upon mouse-over on a team's name, the total number of playoffs won and lost could be presented.
d) Some teams do not have a timeline; this may either be a bug or reflect missing records for those teams in the data set.

JavaScript and HTML5 continue to be a stronghold for building good visualizations from a data set. This visualization gets a 9.5/10 rating from my side.

References:
a) HTML5 as a visualization resource, http://www.visualisingdata.com/index.php/2011/04/should-html5-be-considered-a-visualisation-resource/.
b) NHL play-offs data set, http://www.nhl.com/cup/champs.html.
c) Robby MacDonell's HTML5 Stanley Cup visualization, http://vis.robbymacdonell.com/stanley-cup/.
d) Visualizing Lord Stanley's Cup: an HTML5 experiment, http://robbymacdonell.com/blog/visualizing-the-stanley-cup-finals-with-html5.

Viz: Tweetcatcha

This visualization shows how NY Times articles travel through Twitter after being published. It visualizes the tweets generated by the latest news articles that appeared on the New York Times website over the past 24 hours.


Tweetcatcha uses the New York Times Timeswire API to load the latest news. The API is used to get links and metadata for Times articles as they are published on NYTimes.com. The BackTweets API then searches for the relevant tweets using the title and URL of those articles.
Articles are arranged in a circle, and each article is represented as a small clickable bar showing the total tweets for that article and a short description of it. One option lets you read more about the article by taking you to the actual news link. The other option, "Zoom In", takes you to a page with 24 rings, each ring one hour apart from the next. On or between the rings are small clouds; clicking one pops up a small box showing a tweet. The position of each tweet is based on the time difference between when the article was published and when it was tweeted.
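
The data-gathering step might look roughly like the sketch below. The Times Newswire URL is my assumption about that API's current form (it requires an API key), and the tweet-search step is left as a placeholder because the BackTweets service is no longer available.

// Rough sketch of the data-gathering flow (not Tweetcatcha's actual code).
// The Newswire URL is an assumption; BackTweets has shut down, so
// findTweetsFor() is a hypothetical placeholder.
var NYT_KEY = 'your-api-key';
var newswireUrl =
    'https://api.nytimes.com/svc/news/v3/content/all/all.json?api-key=' + NYT_KEY;

function findTweetsFor(article) {
    // Placeholder: originally the BackTweets API was queried with the
    // article's title and URL to find tweets linking to it.
    return Promise.resolve([]);
}

fetch(newswireUrl)
    .then(function (res) { return res.json(); })
    .then(function (data) {
        // one entry per recently published article
        return Promise.all(data.results.map(function (article) {
            return findTweetsFor(article).then(function (tweets) {
                return { title: article.title, url: article.url, tweets: tweets };
            });
        }));
    })
    .then(function (articlesWithTweets) {
        console.log(articlesWithTweets);
    });
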
The data for this visualization was collected from November 13, 2009 until February 9, 2010. The visualization uses a 107 MB database with 15,327 NYTimes articles and 311,885 tweets related to those articles.
The visualization is useful to people interested in the popularity of a particular news story (shown by the number of tweets). It makes the timing and density of tweets easy to understand, and it shows which stories were most popular within a 24-hour window. It will interest people who are active in social media and curious about how stories spread, since it makes trends and behaviours easy to track.

VIZ: United States population change

The United States Census Bureau has given us a good visualization of population change in the USA over the last 100 years, from 1910 to 2010. It consists of three sections: population change, population density, and apportionment. It is a highly interactive web visualization, presented effectively so as to remove any ambiguity while viewing it.


In the population change section of this visualization, the change is depicted as the percentage change over each 10-year census. Using mouse-hover events, you can immediately see the change in population and the trend from the east coast to the west coast.

The population density section can be explored in the same way to see the population density and which parts of the US are crowded.

Lastly, you can see the apportionment, which means allocating seats in the House of Representatives to the 50 states based on the population census.

Overall, the 'interactive' part of this visualization is quite impressive. You can easily understand the trends just by hovering the mouse over the states. An important thing to note is that all three sections keep the mouse-hover interaction consistent and use a good color-coding scheme. I would rate this viz an 'A'.


Tool: JavaScript InfoVis Toolkit


The JavaScript InfoVis Toolkit is a free and open-source JavaScript library for creating many different forms of web visualization. It features over 10 visualization types and can be easily integrated into existing sites. Because it relies heavily on HTML5, it requires the user to have a modern browser.

Some examples:

Bar Chart: http://thejit.org/static/v20/Jit/Examples/BarChart/example1.html
Sunburst Graph: http://thejit.org/static/v20/Jit/Examples/Sunburst/example2.html
Browsable Tree: http://thejit.org/static/v20/Jit/Examples/Spacetree/example1.html

Usage Example:

The following shows a simple pie chart being created in JavaScript inside the "infovis" container:

// Helper settings, declared here so the snippet is self-contained
var useGradients = true;    // use gradient-filled slices
var labelType = 'Native';   // 'Native' (canvas) or 'HTML' labels

var pieChart = new $jit.PieChart({
    // id of the container div the chart is injected into
    injectInto: 'infovis',
    animate: true,
    offset: 30,
    sliceOffset: 0,
    labelOffset: 20,
    type: useGradients ? 'stacked:gradient' : 'stacked',
    showLabels: true,
    resizeLabels: 7,
    Label: {
        type: labelType, // Native or HTML
        size: 20,
        family: 'Arial',
        color: 'white'
    },
    Tips: {
        enable: true,
        onShow: function(tip, elem) {
            tip.innerHTML = "" + elem.name + ": " + elem.value;
        }
    }
});
// 'json' must hold data in the toolkit's chart JSON format (see the sketch below)
pieChart.loadJSON(json);
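
For completeness, a hypothetical data object for loadJSON might look like the following. I'm assuming the stacked-chart JSON layout used in the toolkit's own chart examples, so check the API documentation for the exact shape.

// Hypothetical sample data, assuming the stacked-chart JSON layout
// used in the toolkit's chart examples.
var json = {
    'label': ['Product A', 'Product B', 'Product C'],
    'values': [
        { 'label': 'Q1', 'values': [20, 40, 15] },
        { 'label': 'Q2', 'values': [30, 10, 45] },
        { 'label': 'Q3', 'values': [38, 20, 35] }
    ]
};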


The library was written by Nicolas Garcia Belmonte, has been under development for the past two years, and is still fairly well maintained. Its home page, http://thejit.org, provides many well-described examples and extensive API documentation.
VIZ: Voyage - RSS Feed Reader
Voyage is a web-based RSS feed reader that lets account holders add RSS feed links and performs syndication and aggregation. It benefits readers by letting them subscribe to timely updates from favorite websites and displaying feeds from many sites in one place. It supplies users with a list of current news items in a brief format so they can scan headlines and choose the items that interest them without visiting the websites that publish them. It provides the pleasure of reading all the feeds a user subscribes to in one place, organized along a timeline.
The visual mapping is three-dimensional: RSS feeds float in space in varying layers, and a timeline provides direct access to a specific layer representing a time interval. The stories in focus are recent and easy to read; they get smaller and blur as they recede into the distance. Clicking on a headline zooms to that layer and expands a story summary with a link to the full description. The dashed timeline has a three-color scheme depicting the density of RSS feeds: the brighter the color of a slot, the higher the number of feeds during that interval.
I find this visualization very interesting and informative. The 3D tool has a very interactive and dynamic interface, and users can customize the application to stay informed in their areas of interest. Also, the latest RSS feeds from the subscribed websites float at the top, saving time spent browsing through data. People who are very busy at work but still wish to keep up with current events may find this visualization very effective.
Please try Voyage here
References:

Viz: Flow map of transportation of passengers in Ireland & World Sex Ratio 2011

Accompanying the report on a railway feasibility study for the Irish Railway Commissioners in 1837 was an atlas which contained maps "drawn to a new design" by Lieutenant Henry D. Harness of the Royal Engineers. The maps showed lines whose widths were proportional to the figures being represented, in this case the average number of people travelling each week between two points (with the data being supplied mostly by the local police).

Harness presented 3 maps which have the following salient features:

1. Map showing the relative number of passengers in different directions by regular public conveyances,
2. Map showing the relative quantities of traffic in different directions,
3. Map showing by varieties of shading the comparative density of the population,

Harness's maps featured in a presentation on the advancement of science prepared for the Statistical Society of London in 1838 by W.R. Rawson. They were referred to as "beautiful maps, which place before the eye a picture of the country representing the traffic of its population".

Sankey diagrams are named after an Irish engineer, Matthew Sankey, who published a diagram in 1898 showing the energy flows in a steam engine. His is considered the first diagram of its kind; such diagrams make it easy to see the dominant flows within a system and essentially help us visualize the data.

From the above, it appears that the first representation of data in this way came even earlier than Minard's diagram of 1869. Harness's maps are evidently the first of their kind, illustrating the flow of passenger traffic relevant to the Irish railways.

That is an example from the history of data visualization, one of the milestones that helped set the trend of visualizing complex data patterns and flows.

Following is an interactive data visualization example, which visualizes world sex ratio for total population and also several age categories.

Try the above chart by hovering the mouse over different countries, and select the options in the dropdown box to change the age category. There is also a key for interpreting the color coding.

I think it's an excellent example of data visualization. It is amazing how so much data is elegantly organized such that users can find what they want just by hovering the mouse over the country they are interested in. The chart also gives a quick perspective on the distribution of sex ratio across the world by making use of color coding.


References:
  1. Economist Data Visualization, http://www.economist.com/blogs/dailychart/2011/07/data-visualisation
  2. Robinson, A. H. (1955). The 1837 maps of Henry Drury Harness. Geographical Journal, 121:440-450
  3. InfoVisualization Wiki, http://www.infovis.info/

Viz: What online marketers know about you

They spy on you!
A study conducted by the Wall Street Journal on the 50 most-visited U.S. websites reveals that marketers are spying on Internet users.

Whenever a user visits one of their favorite websites, they trigger hundreds of electronic tracker files (cookies, beacons, Flash cookies, etc.) that send information about the user back to tracking companies.


Please find the link to the visualization here.

In the image, the top half depicts websites and the bottom half represents tracking companies. Rolling over a website draws lines from that website to the tracking companies that collect information about its users; similarly, rolling over a tracking company shows which websites it tracks. The color of a line indicates whether it represents first-party tracker files (yellow) or third-party tracker files (variations of blue for cookies, beacons, and Flash cookies). The color code on the companies (variations of red) shows the exposure index, which is the sum of the scores for cookies, beacons, and Flash cookies.

Focusing on a specific company gives a detailed description and a score, which is calculated based on: whether the site belongs to an industry self-regulating group; whether it lets users opt out of receiving cookies; whether it is part of an advertising or tracking network; whether it shares the data it collects with others; whether it promises to keep user data anonymous; how long it retains user data; and how it handles sensitive data such as financial or health information. The example of Dictionary.com shows that it has a larger number of trackers. Click here.

This tracking can be useful if it is used for good reasons, such as improving our browsing experience, and can result in a better consumer environment. Since this visualization gives a detailed idea of how exposed we are, we can opt in or out of tracking according to the level of privacy we wish to achieve.

The methodology used to generate this visualization, developed by Ashkan Soltani, a technology consultant hired by the Wall Street Journal, is an interesting read.

Viz: Normalized Domestic Debt Rating

This visualization can be viewed here: Debt Ratings
This is a visualization of Standard & Poor's debt ratings before the debt rating of the United States was cut to AA+ on Aug 5, 2011. It uses Standard & Poor's as the data source and a world map to represent the debt ratings of countries.

Below is a snapshot of the visualization.


The visualization accommodates 6 different types of ratings, which can be switched through the combo box.

Different color schemes are used to represent the ratings: one scheme for 'Normalized ratings' and a different one for all other kinds of ratings. Countries that do not appear in any of the ratings are lightly shaded.

One of the highlights of this visualization is that it allows comparison of two or all types of ratings in one go. For this, the user can select the ‘#of maps’ option embedded in the visualization toolbar.

Another useful feature is that the color scheme can be replaced by bubbles through the circular push buttons labeled ‘Colors or Bubbles?’. The relative area of the bubble will help to determine the debt extent for the country it represents. Additionally, tooltips are used to indicate country name and its rating value. It also provides zooming in and out and scrollbars to navigate across the map.

The user can select a country on the map for specific details and then deselect it to view the global ratings. Selecting a particular range or rating on the color legend will highlight all the respective countries.

We have seen this kind of visualization using a global map before. What stands out here are the little widgets for changing coloring schemes and zooming, which add to the interactivity.

Improvements:

1. Identify mouse events on the global map.

2. Help to visualize special regions like Europe in a better way than zooming.

References:


Viz: Akamai's Real-Time Web Monitor

With this monitoring tool, Akamai generates real-time visualizations of the world's internet traffic and identifies regions prone to attacks, regions with very high traffic volume, and regions experiencing latency. Each region can be either a country or a state. By monitoring these three parameters continuously, quality of service can be maintained.

As mentioned by Akamai in the 'Methodology and Data Collection' section, attack traffic is identified by collecting source and destination IP addresses along with the number of attempted connections. Packets generated by malicious trojans and worms are traced, and the number of attacks per day is measured. The amount of network traffic is analysed from the volume of data being requested and delivered. Connection speeds are measured by running automated tests that take into account the number of downloads, ICMP pings, and existing connections. The Real-Time Web Monitor displays two kinds of latency: absolute latency, which is a measurement across all networks in the area, and relative latency, which is the differential between the region's current latency and its average latency.

The regions with the highest volume of attacks or traffic are called out in the visualization in either orange or pink, depending on which mode the tool is running in. Absolute latency is shown in blue and relative latency in green.

I found these visualizations helpful and intuitive because, at any given time, you can identify the ten worst-performing cities, the ten cities with the highest traffic volume, or the number of attacks in the last 24 hours in any region. Based on the results, appropriate measures can be taken to balance the network load or defend against security attacks.

The visualizations can be found here.

Viz: GE Health InfoScape

Health InfoScape is a tool from GE that shows the user relationships between different health ailments. It is called the disease network, and it represents the most common ailments found among people in America today. The data in this visualization is based on about 7.2 million anonymized electronic patient records owned by GE.



The InfoScape can be viewed in two forms. One is the traditional graph-network format, in which each node (ailment) is connected to another ailment if the two are related. The second is the circle format, in which all the ailments are arranged in a circle and relationships are shown as connections between the circle's elements. The disease network can be organised dynamically by gender and is different for males and females. Each ailment has a 'prevalence' value, the percentage describing how commonly that ailment occurs in males or females; the higher the prevalence, the bigger the node. One click on any node reveals the category, the prevalence level, and the most commonly correlated ailments. The visualization allows the user to search for a specific ailment or select a category from the list given. Based on the prevalence level, a user can easily work out what the ailment might mean, which ailments it is related to, and which symptoms might show up next.

For a user, this disease network is a handy tool for figuring out the relations between different symptoms, and possibly for coming up with a home diagnosis for immediate medication. The interactive depiction of relations between symptoms/ailments and their prevalence levels can help the user narrow down a diagnosis. Another point to consider is that because it is derived from historical data for millions of patients, the accuracy of the data, even though not 100%, is high. Having said that, while this visualization is helpful for taking care of ourselves, it is always safer to visit a doctor.

Health InfoScape is available online and for download here.
Some other cool visualizations worth trying out are LivePlasma and the British History Timeline.

Viz: TouchGraph Amazon Browser

This classic graph visualization lets the user search for books, movies, and electronics with a slight twist. The "TouchGraph Amazon Browser" enhances the browsing experience by letting the user explore connections between the searched term and related items. The search can be made in different categories such as books, music, or movies. Once the search graph is displayed, every individual item can be clicked to get detailed product information along with a link to the standard Amazon page for that product.

Going into the technicalities of the visualization, the TouchGraph search relies on a clustering methodology to represent the searched item, related items, and recommended items, taking into account the sales rank of each item as calculated by Amazon.

Concept of Clusters: As mentioned in the "Help" section of the TouchGraph Amazon Browser, clusters are formed from items similar to the searched item that are part of Amazon's recommended list. The size of each item's bubble is based on its sales rank as calculated by Amazon; the exact algorithm for calculating sales rank is given in the "Help" section. A cluster toolbar lets the user change the number of clusters and the colors given to them.

I found this visualization intuitive and more interesting to use than the normal website. Visualizing the clusters according to common subjects, recommendations, and sales rank gives all the related information about a product at a glance, which makes it a very effective method.

Try a demo of this at http://www.touchgraph.com/amazon

Apart from Amazon, TouchGraph also has similar interactive visualizations for Facebook and SEO.

Viz: Newsmap - A visualization for the 'Google News' news aggregator



Newsmap is a treemap visualization for displaying the real-time information gathered by the Google News news aggregator. Its objective is to divide and present information in clusters, which makes it easier to see patterns in news reporting across the globe.

In Google News, information is grouped into clusters based on similarity of content. Accordingly, in Newsmap, the size of each cell is proportional to the number of related news articles inside the corresponding cluster in the Google News aggregator. This helps users find the most important news articles by country, time, topic, etc. It also makes it easier to identify which countries give importance to which kind of news, such as business, tech, or sports.
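
To illustrate just the sizing rule (not Newsmap's actual layout code), here is a minimal JavaScript sketch in which each cluster gets a rectangle whose area is proportional to its article count, sliced along one axis; the cluster names and counts are made up.

// Minimal one-level treemap slice: cell area is proportional to the
// cluster's article count (illustration only, not Newsmap's code).
function treemapSlice(clusters, x, y, width, height, horizontal) {
    var total = clusters.reduce(function (sum, c) { return sum + c.count; }, 0);
    var offset = 0;
    return clusters.map(function (c) {
        var share = c.count / total;
        var rect;
        if (horizontal) {
            rect = { name: c.name, x: x + offset, y: y, w: width * share, h: height };
            offset += width * share;
        } else {
            rect = { name: c.name, x: x, y: y + offset, w: width, h: height * share };
            offset += height * share;
        }
        return rect;
    });
}

// Example: three hypothetical clusters laid out across a 600x400 area
var cells = treemapSlice(
    [{ name: 'Business', count: 40 }, { name: 'Sports', count: 25 }, { name: 'Tech', count: 35 }],
    0, 0, 600, 400, true
);
console.log(cells);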

This is a nice technique for aggregating the information overload present on the Internet, updated continuously, allowing news readers to sift through the most talked-about events of the moment.

This visualization is present at http://www.newsmap.jp/

Saturday, August 20, 2011

Visualization: Where Does Mobile Malware Come From & How Do You Protect Yourself? [tags: visualizations, mobile, security]


Where Does Mobile Malware Come From & How Do You Protect Yourself? [Infographic]

What is mobile malware? Where does it come from? How does it get into your phone? These questions are just beginning to surface in the public mindset as splashy headlines warn smartphone users of the dangers lurking to take over their shiny, new mobile devices. Security company BullGuard came up with a very informative infographic that shows where mobile malware comes from and how it spreads. Mobile malware does not come from a malevolent cupid shooting poison arrows into users' phones. Like PC viruses, malicious mobile programs are perpetrated by people who control botnets and want the information stored in your smartphone for their own ends.


Mobile malware can come from just about any vector you could think of. It lurks in application stores (especially third-party stores), text messages, emails, websites, search results, and images. Some malware can snoop on your device if you are on an insecure public Wi-Fi channel. Take a look at the infographic below and let us know what steps you take to protect your smartphone from those who would do it harm.

Several months ago we posted an infographic titled "Where Does Your Malware Come From?" Just as with that infographic, we made sure to fact check this infographic (you would be surprised how much false, old or half-information these infographic makers try to slip by busy tech reporters) and the numbers check out. We have written about much of what is in the infographic over the last several months. See the very bottom of the image for BullGuard's sources, though note that no other security company is listed as a source of the information in the image. It looks like BullGuard has superseded some of the research of its competitors while still remaining technically accurate by sourcing it to various research organizations and institutions such as Juniper and the University of Virginia.

Check it out and tell us what you think:

[Infographic: State of Mobile Malware]

Visualization: Who Uses Google Plus Now? Yep, Male Students & Geeks From the US

Visualization: Who is Suing Whom In the Mobile Patent Wars

All vs all. Kinda silly really. 

Chart of the Day: Who is Suing Whom In the Mobile Patent Wars?

Patents are all the rage right now. More precisely, applying for, purchasing and suing the nearest competitor over patents is causing a craze in the mobile business environment. Did Google ever actually want the Nortel patents? Or did they just bet crazy sums (like Pi, the distance from the sun, etc.) because they knew they were going to acquire Motorola and its patent portfolio anyway? Next in line are the InterDigital patents, which are supposedly more in-depth and numerous than the Nortel or Novell patents. Some say we are in serious need of patent reform because the current ecosystem has become anti-innovation and toxic.

Thomson Reuters came out with a great chart yesterday that shows the current legal battleground for mobile patents. It is interesting to note who is getting sued and who is doing the suing. For instance, for all the legal hot water Google has been in, it is technically only being sued by Oracle, over Java, in the mobile realm. Microsoft has multiple suits going against Barnes & Noble, Foxconn (Apple's primary factory where iOS devices are made), Motorola and Inventec. Yet, Apple takes the crown. It is being sued, is suing, or has settled suits with five different corporations.


Apple is being sued by Kodak and has settled a suit and countersuit with Nokia. Yet, Apple is in a suit and countersuit situation with most of the major Android OEMs (except, oddly, LG) - HTC, Samsung and Motorola. Samsung is having a devil of a time trying to keep its Galaxy Tab on store shelves across the world, with injunctions being filed in Australia and in the European Union, specifically by Germany and the Netherlands, both of whom want to keep all Galaxy devices off the shelves.

Microsoft is licensing patents to both HTC and Amazon (it is worth noting that the Amazon vs. Apple legal battles do not involve actual patents and hence are not on this chart). The only entity on this list that appears to have escaped the patent wars is Qualcomm, which has already settled a suit and countersuit with Nokia. Qualcomm is a dark horse in this ecosystem because its chips power millions of devices and it owns (or owned) thousands of patents as well as a chunk of the wireless spectrum. They are, as they say, the straw that stirs the drink.

Who is missing off this list? Intel probably has some legal issues over patents, but not related to mobile. IBM and Cisco surely fall in here somewhere.

Take a look at the chart below. Outside of the nature of patent...

Visualization: How Humanity Created So Much Data & Computable Knowledge


How Humanity Created So Much Data & Computable Knowledge (Infographic)

Stephen Wolfram and team have gathered together a big timeline of key events in the history of systematic data and computable knowledge. The team has created a beautiful infographic and a five-foot-long poster available for mail order (I just bought one, $15 with shipping) in anticipation of the Wolfram Data Summit in DC early next month. We're really at the dawn of a whole new age of data creation, so this timeline will likely look like pre-history relatively soon, but it's fascinating and important nonetheless.

"[When] I first looked at the completed timeline," Wolfram writes, "the first thing that struck me was how much two entities stood out in their contributions: ancient Babylon, and the United States government... [It] is sobering to see how long the road to where we are today has been. But it is exciting to see how much further modern technology has already made it possible for us to go."


[Timeline infographic: click through to view the full timeline.]

Wolfram argues that Artificial Intelligence has languished over the years, but that the body of data that's become available for computation has exploded. "[We] can just start from the whole corpus of systematic knowledge and data - as well as methods and models and algorithms - that our civilization has accumulated, poured wholesale into our computational system... this is what we have done with Wolfram|Alpha: in effect making immediate direct use of the whole rich history portrayed in the timeline."

We've written here for several years about the explosion of data production that's beginning and will be a major factor in determining the nature of human civilization in the near-term. In terms of sheer quantity, far more will be made measurable in the next few years than has been instrumented by any of the other developments on Wolfram's timeline. Google's Marissa Mayer calls the coming Internet of Things "bigger than Moore's law." Former HP CEO Mark Hurd said in 2009: "more data will be created in the next four years than in the history of the planet...

Monday, August 15, 2011

Tool: Apple’s Brazen Revenue Grab Boosts Open Web Apps

Apples "give us a cut" subscription policy is probably giving html5 -- for mobiles -- a boost. 

Apple’s Brazen Revenue Grab Boosts Open Web Apps