"How much variance in value created is explained by 'g'?"
It's easier to point to spaces where value creation can be attributed cleanly (e.g. trading). But you could imagine 'g'-loaded spaces where attribution is quite messy (e.g. entrepreneurship, diplomacy), and if you somehow ran them over 1,000 trials, perhaps you'd find they're equally 'g'-loaded.
Admittedly, what constitutes 'g' should evolve as the economy starts rewarding skills that weren't rewarded before.
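The "1,000 trials" intuition can be made concrete with a toy simulation (all numbers here — the g-loading, the noise level, the trial count — are made-up assumptions, not anything from the thread): if value in a single trial is a g-loaded signal buried in attribution noise, the correlation with 'g' looks weak per trial but re-emerges once you average many trials.

```python
import random

random.seed(0)

G_LOADING = 0.6   # assumed share of per-trial value driven by 'g' (illustrative)
N_PEOPLE = 200

# Each person has a latent 'g' score; value created in any one trial
# is the g-loaded signal plus heavy attribution noise.
people = [random.gauss(0, 1) for _ in range(N_PEOPLE)]

def observed_value(g, noise_sd=2.0):
    return G_LOADING * g + random.gauss(0, noise_sd)

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# One noisy trial: attribution is messy, correlation with 'g' is weak.
one_trial = [observed_value(g) for g in people]

# Average over 1,000 trials: the noise washes out and the g-loading re-emerges.
avg_1000 = [sum(observed_value(g) for _ in range(1000)) / 1000 for g in people]

print(round(correlation(people, one_trial), 2))  # weak
print(round(correlation(people, avg_1000), 2))   # close to 1
```

The messy-attribution domains (entrepreneurship, diplomacy) are like the one-trial case; trading is like the averaged case, where each person effectively gets thousands of independent trials.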
I’m thinking more about how LLMs help scale cognition. We can now apply much more thinking to all problems than we used to be able to. In particular, LLMs may be very useful in enabling opportunities with low Returns on Cognition (i.e. things that wouldn’t have made sense to do before because they required too much thinking to be worthwhile).
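The "low Return on Cognition" point reduces to a simple threshold rule. A minimal sketch, with entirely hypothetical dollar figures (the $200 value, the hours, and the per-hour costs are all assumptions for illustration):

```python
def worth_doing(value_of_answer, hours_of_thinking, cost_per_hour):
    """An opportunity clears the bar when the value of the answer
    exceeds what the thinking costs."""
    return value_of_answer > hours_of_thinking * cost_per_hour

# A low-stakes question: $200 of value, needing 10 hours of analysis.
# At a human analyst's ~$100/hr it never made sense to think about.
print(worth_doing(200, 10, cost_per_hour=100))  # False

# If LLMs drop the effective cost of cognition to ~$1/hr-equivalent,
# the same opportunity flips to worthwhile.
print(worth_doing(200, 10, cost_per_hour=1))    # True
```

Cheaper cognition doesn't change the value of any answer; it just moves a huge number of previously sub-threshold questions across the line.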
This is the cleanest, most minty-fresh rundown of how exchange evolves from goods-and-services barter to monetary transactions to commodity exchanges to pure finance.
One thing I think you miss is the _stakes_ of “getting the price right.” In my industry (electric utilities), even a municipal utility serving tens of thousands of customers burns through ~$100M/year. I saw an example recently of a guy getting a price so wrong it completely broke the business model of his organization.
And this is most industries! If your profit margin is under 5%, you have to _fight_ to stay in the black, and if some reedy nerd can get your prices right for “only” $500k/year, you pay up.
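The arithmetic behind "you pay up" is worth spelling out. Using the ~$100M utility from above, an assumed 5% margin, and a purely hypothetical 1% pricing improvement:

```python
revenue = 100_000_000   # ~$100M/year utility from the example above
margin = 0.05           # thin 5% profit margin (assumed)
analyst_cost = 500_000  # the "only" $500k/year pricing expert

profit = revenue * margin                  # $5M baseline profit
# A 1% improvement in pricing accuracy flows straight to the bottom line.
pricing_gain = revenue * 0.01              # $1M (hypothetical improvement)
net_benefit = pricing_gain - analyst_cost  # $500k, a 10% bump to profit

print(profit, pricing_gain, net_benefit)
```

On a 5% margin, a 1% revenue-side pricing error is a fifth of all profit, which is why the $500k salary clears easily.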
I especially like the Return on Cognition concept; would be interesting to try to measure this across industries.
On a related note, have you read this: https://www.darioamodei.com/essay/machines-of-loving-grace