
Classification and Clustering

In order to write a tutorial about classification, it was necessary to find an example that was broad enough that it would need to be sub-divided. Since I actually care about whether you remember this stuff, it needed to be something that a lot of people like and would relate to. And since I have a lot of international subscribers, it needed to be cross-cultural as well. So what is universal, cross-cultural, and dearly loved?
Beer. Heck yeah.
There’s American beer, Mexican beer, German beer, Belgian beer… hell, even the Japanese make beer. There’s IPA, lager, pilsner; dark, light, stout. There are so many ways to classify beer that we could spend weeks doing it (so naturally, I did).
Now, before you can classify anything you have to determine the characteristics that you’re going to use. For beer you could use country of origin, color, alcohol content, type of hops, type of yeast, and calorie count among other things. That way you could sort based on any of those characteristics to judge similarities between the various brews.
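The idea above can be sketched in a few lines of Python: each beer becomes a record of the characteristics you chose, and grouping on any one of them is classification in action. The beer names, styles, and numbers below are invented for illustration.

```python
# Each beer is just a record of the features we decided to measure.
# All names and values here are made up, not real product data.
beers = [
    {"name": "Beer A", "style": "pilsner", "color": "golden", "abv": 5.0},
    {"name": "Beer B", "style": "stout",   "color": "dark",   "abv": 6.5},
    {"name": "Beer C", "style": "lager",   "color": "golden", "abv": 4.2},
]

# Grouping on any single characteristic is classification in action:
by_color = {}
for beer in beers:
    by_color.setdefault(beer["color"], []).append(beer["name"])

print(by_color)  # golden beers grouped together, dark beers together
```

Swap `"color"` for `"style"` or a binned `"abv"` and you get a different, equally valid classification — the characteristics you pick define the sort.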
And just like that, you’ve done classification. Simple, right?
To take the example further, let’s assume that my favorite beer is Sweetwater “Take Two” (a pilsner made here in Atlanta) but I’m in Santiago, Chile this week for a conference.  The Chileans are a lovely people, but the management at my hotel doesn’t know about the wonderful goodness made by Sweetwater and they don’t have it at the lobby bar. I explain my predicament (read: “impending crisis”) to the bartender. What would a good bartender do?
If he’s been in the business for any length of time, he’s already gone through the classification step for beers but probably didn’t realize it. He has them sorted by characteristics in his head. He starts asking me questions about Take Two: “How dark is it?”, “How ‘hoppy’ does it taste?”, and “How many can you drink before passing out?”. Based on my answers he knows that what I’m describing is basically a golden-blonde pilsner with spicy hops and an earthy tone.
He might also figure out that I’ll need help back to my room at the end of the night because of all this “field research”.
So now that he has the characteristics of my favorite brew figured out, he compares that against the beers he knows. The ones with the most matching criteria form a “cluster” that he can make recommendations (and hopefully free samples) from. My night is saved, and his tip is big. Everyone is happy.
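The bartender's mental matching can be sketched as a similarity score: count how many characteristics of the described beer each known beer shares, and recommend from the closest cluster. All beers and traits below are hypothetical stand-ins, not real menu data.

```python
# The customer's description, as answers to the bartender's questions.
described = {"color": "golden", "hops": "spicy", "body": "light"}

# The bartender's mental catalog (all entries invented for illustration).
known_beers = {
    "Local Pilsner": {"color": "golden", "hops": "spicy", "body": "light"},
    "House Stout":   {"color": "dark",   "hops": "mild",  "body": "heavy"},
    "Amber Ale":     {"color": "amber",  "hops": "spicy", "body": "light"},
}

def similarity(a, b):
    """Number of characteristics two beers have in common."""
    return sum(1 for k in a if a.get(k) == b.get(k))

# Rank the bar's beers by how closely they match the description;
# the top of the list is the cluster to recommend from.
ranked = sorted(known_beers,
                key=lambda name: similarity(described, known_beers[name]),
                reverse=True)
print(ranked[0])  # the best match to pour first
```

Counting exact matches is the crudest possible similarity measure; real clustering algorithms use distance metrics over numeric features, but the principle — nearest neighbors form a cluster — is the same.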
And just like that, you understand clustering.
How does this apply to the business world? There are many potential applications of classification and clustering, but a common one is identifying the characteristics of a company’s best customers and then searching a pool of potential customers for ones that share those characteristics. If your best customers have between 1,000 and 2,500 employees, are in the manufacturing and retail verticals, and are located in the New England area of the US, that’s good information to know.
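That customer-profiling step translates directly to code: describe the ideal-customer traits, then filter prospects against them. The profile values, company names, and figures below are hypothetical, chosen only to mirror the example in the text.

```python
# Traits of our best customers (the "ideal profile") — all illustrative.
ideal = {
    "employees": (1000, 2500),                      # size range
    "verticals": {"manufacturing", "retail"},       # target industries
    "region": "New England",                        # geography
}

# A pool of prospects to screen (all names and numbers invented).
prospects = [
    {"name": "Acme Mfg",  "employees": 1800, "vertical": "manufacturing", "region": "New England"},
    {"name": "BigBox Co", "employees": 9000, "vertical": "retail",        "region": "Midwest"},
    {"name": "Shoreline", "employees": 1200, "vertical": "retail",        "region": "New England"},
]

def matches(p):
    """True if a prospect fits every trait of the ideal profile."""
    lo, hi = ideal["employees"]
    return (lo <= p["employees"] <= hi
            and p["vertical"] in ideal["verticals"]
            and p["region"] == ideal["region"])

best_fits = [p["name"] for p in prospects if matches(p)]
print(best_fits)  # → ['Acme Mfg', 'Shoreline']
```

A strict all-traits filter is the simplest version; in practice you would score partial matches (as the bartender did) rather than demand every criterion.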
What applications can you think of?
