Productivity
From TNW | Artificial Intelligence
4 days ago
Why probability, not averages, is reshaping AI decision-making
ChanceOmeters measure uncertainty directly, improving decision-making by providing odds rather than relying solely on averages.
Sometimes the reason pi shows up in randomly generated values is obvious—if there are circles or angles involved, pi is your guy. But sometimes the circle is cleverly hidden, and sometimes the reason pi pops up is a mathematical mystery!
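One classic "hidden circle" you can run yourself: estimate pi by sampling random points in the unit square and counting how many fall inside the inscribed quarter circle. This is a minimal Monte Carlo sketch (function name and seed are illustrative, not from the article):

```python
import random

def estimate_pi(samples=100_000, seed=42):
    """Monte Carlo estimate of pi: the fraction of uniform random points
    in the unit square that land inside the quarter circle tends to pi/4."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / samples

approx = estimate_pi()  # close to 3.14 for large sample counts
```

No circle is drawn anywhere in the code, yet pi emerges from the geometry of the sampling region.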
Weather impacts sales. Every retailer knows it. But for most, the likelihood of rain, snow, or sleet on the third of March somewhere in the Midwest rarely informs any decision. Vendors such as Weather Trends have offered accurate, long-range forecasts for more than 20 years. But the opportunity is not predicting the weather; it's knowing what to do with the data. AI might change that.
Imagine you're selecting an influencer to work with on your new campaign. You've narrowed it down to two, both in the right area, both creating the right sort of content. One has 24.6 million subscribers, the other 1.4 million. Which do you choose? Now imagine you could find out the first had 8.7 million unique viewers last month, while the second had 9.9 million. Do you want to change your mind?
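Subscriber counts and actual reach can point in opposite directions. A quick back-of-the-envelope comparison, using the figures above (names "A" and "B" are placeholders), makes that concrete:

```python
# Subscriber counts vs. unique monthly viewers, from the example above.
influencers = {
    "A": {"subscribers": 24_600_000, "unique_viewers": 8_700_000},
    "B": {"subscribers": 1_400_000, "unique_viewers": 9_900_000},
}

# Viewers reached per subscriber: a rough proxy for how "live" the audience is.
ratios = {
    name: stats["unique_viewers"] / stats["subscribers"]
    for name, stats in influencers.items()
}
# A reaches roughly 0.35 viewers per subscriber; B roughly 7.1,
# and B reaches more people in absolute terms as well.
```

The bigger follower number loses on both measures that arguably matter: total reach and audience engagement.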
The NFL is no stranger to innovation. Over the years, teams have adopted new strategies, technologies, and data-driven approaches to stay ahead of the competition. One of the most significant advancements in recent years is the rise of sophisticated analytics and modeling. These tools have become essential for teams seeking to improve player performance, game strategy, and overall team development.
Which algorithm is this? If you step back, this maps almost perfectly to the Top K Frequent Elements problem. We usually solve it for integers in a list. Here, the "elements" are audience profiles: age and body-type combinations. First, define what an audience profile looks like: `case class Profile(age: Int, height: Int, weight: Int)`. What we want is a function like this:
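A sketch of that function in Python (the article's own snippet is Scala; the grouping logic is identical): count occurrences of each profile, then keep the k most frequent.

```python
from collections import Counter
from typing import NamedTuple

class Profile(NamedTuple):
    age: int
    height: int
    weight: int

def top_k_profiles(profiles, k):
    """Top K Frequent Elements, applied to audience profiles."""
    counts = Counter(profiles)           # NamedTuples are hashable, so they count directly
    return [p for p, _ in counts.most_common(k)]

# Illustrative data, not from the article.
audience = [
    Profile(25, 180, 75),
    Profile(25, 180, 75),
    Profile(40, 170, 80),
    Profile(25, 180, 75),
    Profile(40, 170, 80),
    Profile(60, 165, 70),
]
top = top_k_profiles(audience, 2)
# → [Profile(age=25, height=180, weight=75), Profile(age=40, height=170, weight=80)]
```

Counting is O(n) and `most_common(k)` handles the selection, which is exactly the standard solution shape for Top K Frequent Elements.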
A traveler might search for a weekend getaway and still see travel ads weeks later, long after returning home. The data was right. The timing wasn't. AI-driven marketing has the potential to close that gap, but only if it understands context. Personalization built solely on identity or past behavior can reveal who someone is, but not when or why they're ready to act. As AI takes center stage in marketing strategy, context is emerging as the differentiator that turns reactive automation into predictive intelligence.
When discussing their results, they tell us that Facebook's reporting or Google Analytics show the ad campaigns as barely breaking even. Yet they keep investing in this channel. They reason that Facebook can only see a fraction of the sales, so if Facebook is reporting a 1x return on ad spend (ROAS) then it's probably at least 2x in reality.
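Their reasoning is simple arithmetic: if the platform's attribution only observes some fraction of the conversions it drives, the true ROAS scales by the inverse of that fraction. A sketch, with the visibility rate as an assumed input:

```python
def true_roas(reported_roas, visibility):
    """Scale reported ROAS by the share of attributed sales the platform can see.

    visibility: assumed fraction of real conversions the platform observes (0-1].
    """
    return reported_roas / visibility

# If Facebook sees only half the sales its ads drive,
# a reported 1x ROAS is really about 2x.
estimate = true_roas(reported_roas=1.0, visibility=0.5)  # → 2.0
```

The hard part, of course, is estimating `visibility` honestly; the formula itself is just the advertisers' stated logic made explicit.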
When it comes to working with data in a tabular form, most people reach for a spreadsheet. That's not a bad choice: Microsoft Excel and similar programs are familiar and loaded with functionality for massaging tables of data. But what if you want more control, precision, and power than Excel alone delivers? In that case, the open source Pandas library for Python might be what you are looking for.
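A minimal taste of that control: the same group-and-aggregate operation a spreadsheet pivot table performs, done in a few lines of Pandas (the column names are made up for illustration):

```python
import pandas as pd

# A small table of the kind you might otherwise keep in a spreadsheet.
sales = pd.DataFrame({
    "region":  ["North", "South", "North", "South"],
    "revenue": [120, 90, 200, 160],
})

# Group and aggregate programmatically, with full control over the result.
totals = sales.groupby("region")["revenue"].sum()
# totals["North"] == 320, totals["South"] == 250
```

Because the result is itself a Pandas object, it can be filtered, joined, or exported in the same script, which is where the library pulls ahead of point-and-click workflows.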
The title "data scientist" is quietly disappearing from job postings, internal org charts, and LinkedIn headlines. In its place, roles like "AI engineer," "applied AI engineer," and "machine learning engineer" are becoming the norm. This Data Scientist vs AI Engineer shift raises an important question for practitioners and leaders alike: what actually changes when a data scientist becomes an AI engineer, and what stays the same? More importantly, what skills matter if you want to make this transition intentionally rather than by accident?
Asif, on the other hand, is doing something else: he's doing things the Pybites way! He's building with a focus on providing value. We spent a lot of time discussing a problem I'm seeing quite often now: developers who limit themselves with AI. That is, they learn how to make an API call to OpenAI and call it a day. But as Asif pointed out during the show, that's not engineering. That's just wrapping a product.
Most beginner data portfolios look similar. They include:
- A few cleaned datasets
- Some charts or dashboards
- A notebook with code and commentary
Again, nothing here is wrong. But hiring teams don't review portfolios to check whether you can follow instructions. They review them to see whether you can think like a data analyst. When projects feel generic, reviewers are left guessing:
Every year, poor communication and siloed data bleed companies of productivity and profit. Research shows U.S. businesses lose up to $1.2 trillion annually to ineffective communication: about $12,506 per employee per year. This stems from breakdowns that waste an average of 7.47 hours per employee each week on miscommunication. The damage isn't only interpersonal; it's structural. Disconnected and fragmented data systems mean that employees spend around 12 hours per week just searching for information trapped in those silos.
SHAP for feature attribution
SHAP quantifies each feature's contribution to a model prediction.
LIME for local interpretability
LIME builds simple local models around a prediction to show how small changes influence outcomes. It answers questions like: "Would correcting age change the anomaly score?" "Would adjusting the ZIP code affect classification?" Explainability makes AI-based data remediation acceptable in regulated industries.
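The LIME-style question "Would correcting age change the anomaly score?" can be illustrated with a toy sensitivity check: perturb one feature of a single record and watch a black-box score move. This is a hand-rolled sketch of the idea, not the actual `lime` or `shap` libraries, and the scoring rule is invented for illustration:

```python
def anomaly_score(record):
    """Hypothetical black-box model: flags implausible ages and
    ZIP codes outside the expected five-digit range."""
    score = 0.0
    if record["age"] < 0 or record["age"] > 120:
        score += 0.8
    if not (10000 <= record["zip"] <= 99999):
        score += 0.5
    return score

def local_effect(record, feature, new_value):
    """How much does the score change if one feature is corrected?"""
    perturbed = dict(record, **{feature: new_value})
    return anomaly_score(perturbed) - anomaly_score(record)

record = {"age": 212, "zip": 30301}      # age is likely a data-entry typo
delta = local_effect(record, "age", 21)  # "Would correcting age change the score?"
# delta is -0.8: correcting the age removes the entire anomaly signal.
```

Real LIME fits a weighted linear model over many such perturbations rather than one, but the question it answers per feature is exactly this local "what if".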
What happens under the hood? How is the search engine able to take that simple query and search among the billions, even trillions, of images available online? How does it find that one photo, or similar ones, from all that? Usually, an embedding model is doing this work under the hood.
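The core mechanic can be sketched in a few lines: some model has already mapped each image (and the query) to a fixed-length vector, and search becomes "rank stored vectors by similarity to the query vector". The tiny 3-dimensional embeddings and file names below are made up; real systems use hundreds of dimensions and approximate nearest-neighbor indexes rather than a full scan.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical embedding index: image name -> precomputed embedding vector.
index = {
    "sunset.jpg": [0.9, 0.1, 0.0],
    "cat.jpg":    [0.0, 0.8, 0.6],
    "beach.jpg":  [0.8, 0.2, 0.1],
}

def search(query_vec, k=2):
    """Return the k images whose embeddings are most similar to the query."""
    ranked = sorted(index, key=lambda name: cosine(index[name], query_vec),
                    reverse=True)
    return ranked[:k]

results = search([1.0, 0.0, 0.0])
# → ["sunset.jpg", "beach.jpg"]
```

The model's job is to place semantically similar images (and text queries) near each other in this vector space; after that, retrieval is just geometry.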