Generative AI: easy adoption, continuously improving, protecting content, hyperscale efficiency

Four extracts from my LinkedIn

Lak Lakshmanan
Aug 31, 2023

If you follow me on LinkedIn, you probably have seen these links and posts already. I’m reposting them here more as a memory aid than anything else.

1. Adopting generative AI is getting easier

Article: Get a Second Brain with Quivr.

My take:

New improvements in Gen AI very quickly become available as open-source software. The moat will lie in recognizing where to apply these new technologies. My post suggests how to look for adoption opportunities for four of them.

If you are a product leader, look for opportunities to infuse these Gen AI capabilities into your products and services: provide a query-based UI (to take advantage of prompts), automate parts of workflows (employing instruction fine-tuning), simplify user workflows (to leverage agents), or provide instantaneous access to state/knowledge (to take advantage of RAG).
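To make the last of those capabilities concrete, here is a minimal sketch of the RAG pattern: retrieve the most relevant snippet for a user query, then ground the prompt in it. Real systems use vector embeddings and send the prompt to a hosted LLM; simple word overlap stands in for retrieval here, and the LLM call is omitted.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word set; a stand-in for real embeddings."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    return max(docs, key=lambda d: len(tokens(query) & tokens(d)))

def rag_prompt(query: str, docs: list[str]) -> str:
    """Ground the prompt in retrieved context before sending it to an LLM."""
    context = retrieve(query, docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Shipping is free on orders over fifty dollars.",
]
prompt = rag_prompt("What is the refund policy?", docs)
```

The point of the pattern is the instantaneous access to state/knowledge: the model answers from content it was never trained on, because the relevant snippet travels inside the prompt.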

2. Continuously improving LLMs

Article: Tweet by Yann LeCun (of NYU and Meta).

My take:

Suppose you have a v1 LLM. You then get more data (e.g., articles on the Hawaii fires), but it is all unlabeled. Use the v1 LLM to create instructions (“what role did invasive plants play in the Maui fire?”) whose answers are the new articles you just got. Augment the training dataset with these instruction-and-article pairs, and retrain. This is what Yann LeCun means by “self-augment”. The other self-alignment points he makes involve using the LLM in similar ways.

Foundational LLMs can be used to create datasets, evaluate responses, and fine-tune for new tasks. Therefore, the human costs associated with labeling and similar work will come down, and many more use cases will potentially open up. Also, once you create a solution, you have a path to using the LLM itself to continuously improve it.
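The self-augmentation loop described above can be sketched in a few lines. Here, `generate_question()` is a placeholder for a call to the v1 LLM (an assumption; a real system would prompt the model with something like “write a question this article answers”), and each resulting pair becomes a supervised example for retraining.

```python
def generate_question(article: str) -> str:
    """Placeholder for the v1 LLM generating an instruction for an article."""
    topic = article.split(".")[0]
    return f"What does the new data say about this: {topic}?"

def self_augment(articles: list[str]) -> list[dict]:
    """Turn unlabeled articles into (instruction, response) training pairs."""
    return [
        {"instruction": generate_question(a), "response": a}
        for a in articles
    ]

new_articles = [
    "Invasive grasses fueled the Maui fire. They spread after plantations closed.",
]
pairs = self_augment(new_articles)
# Each pair is appended to the instruction-tuning dataset before retraining.
```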

3. It’s possible to have a granular approach to protect content from LLMs

Article: Robots.txt is not the answer.

My take:

Consider using copyright tags to protect your content. For example, if you mark your content as CC-BY-SA, then anybody can use a snippet of your content as long as they link to it (that’s the attribution). However, an LLM can’t simply regurgitate your text without attribution. Of course, this relies on the major LLM providers honoring such a granular approach. Today, they offer only an all-or-none approach with robots.txt. There is no technological reason we can’t use copyright licenses instead of robots.txt.
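A hypothetical sketch of what license-aware crawling could look like (no such standard exists today; the meta tag name and the policy table below are both assumptions for illustration). Instead of robots.txt’s all-or-none rule, the crawler reads a per-page license tag and decides how that page’s text may be used.

```python
import re

# Assumed mapping from license to what an LLM may do with the text.
LICENSE_POLICY = {
    "CC-BY-SA": "quote_with_attribution",
    "CC0": "unrestricted",
    "all-rights-reserved": "do_not_train",
}

def page_license(html: str) -> str:
    """Extract a (hypothetical) license meta tag; default to most restrictive."""
    m = re.search(r'<meta name="license" content="([^"]+)"', html)
    return m.group(1) if m else "all-rights-reserved"

def usage_for(html: str) -> str:
    """Decide per page, rather than per site as robots.txt does."""
    return LICENSE_POLICY.get(page_license(html), "do_not_train")

page = '<html><head><meta name="license" content="CC-BY-SA"></head></html>'
```

Defaulting unmarked pages to the most restrictive policy keeps the scheme safe for publishers who never adopt the tag.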

4. Why a hyperscaler is more efficient at running fine-tuned LLMs

Article: Adaptation of Large Foundation Models.

My take:

Hyperscalers have a way to spread out the costs of running LLMs across many customers, even if those customers all run different fine-tuned models, as long as the underlying foundational model is the same. Unless you have a strong reason to run a fine-tuned model on premises (e.g., regulatory compliance) or to use a bespoke fine-tuned model (e.g., one trained on industry jargon), it will likely be more cost-effective to serve your fine-tuned models using a managed service provider.
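A back-of-the-envelope sketch of the cost argument (all numbers are illustrative assumptions, not real prices). With adapter-style fine-tuning such as LoRA, many customers’ models share one resident copy of the base weights, so the dominant serving cost is amortized and only a small per-adapter cost is marginal.

```python
# Assumed illustrative costs, not real cloud prices.
BASE_COST_PER_HOUR = 40.0    # keeping the shared base model loaded and hot
ADAPTER_COST_PER_HOUR = 0.5  # marginal cost per customer's fine-tuned adapter

def cost_per_customer(n_customers: int) -> float:
    """Hourly cost per customer when n customers share one base model."""
    return BASE_COST_PER_HOUR / n_customers + ADAPTER_COST_PER_HOUR

solo = cost_per_customer(1)      # running your own copy of the base model
shared = cost_per_customer(100)  # a hyperscaler amortizing across 100 customers
```

Under these assumed numbers, a solo deployment costs $40.50/hour while the shared deployment costs $0.90/hour per customer, which is the efficiency the post describes.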


If you are not that familiar with Generative AI, my March article on the four approaches to build on top of foundational models remains quite valid. There is now a fifth, agents, which involves using an LLM to create the parameters for calls to external APIs; agents are now widely employed (within guardrails, of course).
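A minimal sketch of that agent pattern: the LLM’s job is to emit the parameters of an external API call, and the application validates them (the guardrail) before executing anything. Here `parse_llm_output()` stands in for a real model’s structured function-calling response, and the tool registry is an assumption for illustration.

```python
import json

# Guardrail: the only tools the agent may call, and their allowed parameters.
ALLOWED_TOOLS = {"get_weather": {"city"}}

def parse_llm_output(text: str) -> dict:
    """Stand-in for the LLM's JSON tool call (real models emit this directly)."""
    return json.loads(text)

def execute_tool_call(call: dict) -> str:
    """Validate the LLM-proposed call against the guardrail, then dispatch."""
    tool, args = call["tool"], call["args"]
    if tool not in ALLOWED_TOOLS or set(args) - ALLOWED_TOOLS[tool]:
        raise ValueError(f"blocked tool call: {call}")
    # Dispatch to the real external API here; stubbed for this sketch.
    return f"calling {tool} with {args}"

llm_text = '{"tool": "get_weather", "args": {"city": "Honolulu"}}'
result = execute_tool_call(parse_llm_output(llm_text))
```

The key design choice is that the model never executes anything itself: it only proposes parameters, and unknown tools or unexpected arguments are rejected before any API is touched.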




Articles are personal observations and not investment advice.