5 Comments
Anthony Chung

I often think about the question: what is the complementary pairing to AI cloud genies? Asking: what is the one thing the genie doesn't know? Aladdin's specific three wishes. Their idiosyncratic context. One is in the cloud, all powerful. The other on the ground. Limited. But specific.

Old stories. Same pattern insights. Not timeless. But always timely.

The genie in the cloud who can answer anything always needs a connection to Aladdin on the ground, who will ask for only one thing. It's the one thing the genie doesn't know, because Aladdin could ask for one specific, narrow permutation from any combination in the universe. The value of the pairing is not power. But relational proximity to the problem space.

Anthony Chung

Genie. Can you make me a prince?

There is a lot of grey area in "Make me a prince."

Variations. Prince as in Purple Rain? Prince Charles. Now king.

Both Generative AI and Genies.

All-powerful. Still dealing with the same restrictions.

My favorite: Disney's original Aladdin and Genie. I still laugh hearing Robin Williams.

Come closer. Closer. Too close!

Anthony Chung

Early morning typos. It’s not its. One thing. Not One things.

Anthony Chung

This does seem to be the pattern of what people are offering. Offering up our workflow. Offering up our secret sauce.

These are company secrets that consulting groups protect. Their moat. And their deeper value. They are happy to answer questions. But not necessarily happy to offer up their consulting template of questions. Or their workflow.

These look to be the same reasons why consulting groups don't share: it helps them differentiate, keep their competitive advantage, value their accumulating lived experience, maintain a moat against others, and not feed their competitors' flywheel.

I think your speculation about how larger groups like OpenAI can learn from the questions and exposed process has insight into what is valuable for their flywheel towards AGI and broader application. Not because they can go wider. But because they can narrow down better. The vector points are currently too wide for how narrower applications are built for specific types of workflows.

If this is the flywheel for larger groups, then it feeds back into what counts as moats and differentiating advantages. Using medieval imagery: we can ask of our moats, are they really streams that feed the waterwheels of the kingdom downstream?

Brent Maxwell

One thought I have about point #2 is that I think it'll play out in different ways. On one hand, vertical players will attack some of the problems that can be solved with LLMs, but not all the vertical players will be able to develop BEST IN CLASS features that solve their customers' needs. This still leaves a big market gap!

And yeah, the Yann LeCun podcast with Lex was great. LLMs are not the path to AGI! My version of his argument is: language is a compression algorithm for ideas, and ideas are compression algorithms for information. This means that, by definition, many orders of magnitude of knowledge cannot be captured by language, and therefore cannot be conceptualised by LLMs.

Yann's example of the sheer data volume of a 3-year-old's visual processing alone vs a language subset was so interesting!
