AidfulAI newsletter covering Snipd's new AI Notes feature. The feature gives you personalized notes from the podcasts you listen to, simply by tapping your headphones whenever you hear something interesting.
Why should there even be something like cash back or points programs? These programs take money from the merchant and give it to the end-consumer, while the middlemen keep a cut.
I believe a world without these programs (or, as you said, “watered-down” versions) is fairer for the merchant and for other shoppers not using such cards.
Especially if the merchant is not allowed to pass this added fee on to the end-consumer.
The programs don’t take money from the merchants - they raise their prices to cover the fees.
The reality is that they exist largely because the people earning the points aren’t paying the bills. Most CC Rewards programs target the professional class of travelers - people who travel for work where the employer pays the bill. It’s essentially taking from consulting companies’ clients to give to the employees.
I had similar thoughts about the general concept of using AI to automate AI Safety.
I really like their approach and I think it’s valuable. And in this particular case, they do have a way to score the explainer model.
And I think it could be very valuable for various AI Safety issues.
However, I don’t yet see how it can help with the potentially biggest danger where a super intelligent AGI is created that is not aligned with humans.
The newly created AGI might be 10x more intelligent than the explainer model, to such an extent that the explainer model is not capable of understanding any tactics deployed by the superintelligent AGI. In the same way, ants are most probably not capable of explaining the tactics deployed by humans, even if we gave them 100 years to figure it out.
As someone who has created several LLM-based applications running in production, my personal experience with langchain has been that it is too high of an abstraction for steps that in the end are actually fairly simple.
And as soon as you want to slightly modify something to better accommodate your use case, you are trapped in layers and layers of Python boilerplate code and unnecessary abstractions.
Maybe our LLM applications haven’t been complex enough to warrant the use of langchain, but if that’s the case, then I wonder how many such complex applications actually exist today.
Anyway, I came away feeling quite let down by the hype.
For my own personal workflow, a more “hackable” architecture would be much more valuable. Totally fine if that means it’s less “general”.
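To illustrate the kind of “hackable” architecture meant here, below is a minimal sketch of my own (not code from any of the comments): each pipeline step is a plain Python function, so any stage can be read, inspected, or swapped without digging through framework layers. The `call_llm` function is a hypothetical stub standing in for whatever model API you actually use.

```python
def call_llm(prompt: str) -> str:
    """Stub for a real model call (e.g. an HTTP request to your provider)."""
    return f"<model answer to: {prompt!r}>"

def build_prompt(question: str, context: str) -> str:
    # The prompt template is ordinary string formatting -- trivial to modify.
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

def answer(question: str, context: str) -> str:
    # The whole "chain" is just two function calls, visible at a glance.
    prompt = build_prompt(question, context)
    return call_llm(prompt)

print(answer("What does Snipd do?", "Snipd is a podcast app with AI notes."))
```

The point is not that this toy replaces a framework, but that when every step is a plain function, modifying one stage costs a one-line edit rather than a subclass hierarchy.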
As a comparison, I remember the early days of Hugging Face Transformers, where they did not try to create a 100% general high-level abstraction on top of every conceivable neural network architecture. Instead, each model architecture was kept somewhat separate from the others, making it much easier to “hack”.
Comparing Langchain to Hugging Face Transformers is apples and oranges. One is for research, one is for production. Production ML requires more abstraction, not less.
I disagree. Production systems don't need to be full of AbstractSingletonProxyFactoryBeans which is basically what LangChain is. For example, Linux certainly isn't.