Living in SoCal, I almost always prefer to order online. Most local businesses are losing to their e-commerce competitors; no wonder commercial spaces are empty.
I run a small e-commerce shop as a side business. I would consider having a physical space just for the sake of luxury, but I would rather spend that monthly rent on online marketing.
IMHO, that's what is happening. Bank problems or anything else are secondary; if it were profitable for businesses to operate from physical locations, the other factors would vanish.
The article applies to all kinds of property loans, though.
Apartment complexes could also be 50% vacant and still be "worth" their original value if the asking rents remain high, since the paper valuation keys off asking rents rather than rent actually collected.
Office buildings that got cleared out after COVID: same thing.
Brick-and-mortar retail is the same.
The article is more a criticism of how asset values are calculated and how loans are managed to avoid foreclosure. The result is buildings and loans that look financially sound on paper but sit underutilized, because the alternative is letting prices find a new equilibrium at the expense of lenders and debt holders.
Given its stability track record, I was more curious about how SQLite does anomaly testing. Sadly, the article devotes only a few words to it.
Truly one of the best software products! It runs on practically every device, and it is rock-solid.
I would say the AI consumption aspect was a side effect: the primary goal was to "generate" new stuff. So far, the significant boost for me has been coding. Still, for most other people, I think you are right: 90% of the benefit comes from having an interactive, conversational search on top of all the information the AI can read/consume.
Firefox has been lagging on Web features for a long time. I was a Zen browser user for about a year and recently moved back to Arc, simply because almost all interactive websites look bad on the Firefox engine; somehow it doesn't have the same level of JS API support as Chrome, especially for WebRTC, audio, and video. And it is frustrating that they think the problem is ad blockers!
IMHO, this is not too bad! But obviously, anyone coming from the software product industry knows that building features isn't the same as operating them in practice and optimizing for the actual use case, which takes a ton of time.
Waymo has a huge head start, and it is evident that the "fully autonomous" robotaxi date is much further out than what Elon says publicly. They will do it, but it is not as close as the hype suggests.
Thanks for the context; I didn't realize the supervisor sits in the passenger seat in Austin. They do have a kill switch / emergency brake, though:
> For months, Tesla’s robotaxis in Austin and San Francisco have included safety monitors with access to a kill switch in case of emergency — a fallback that Waymo currently doesn’t need for its commercial robotaxi service. The safety monitor sits in the passenger seat in Austin and in the driver seat in San Francisco
Waymo absolutely has a remote kill switch and remote human monitors. If anything, Tesla is being the responsible party here by having a real human in the car.
TBH, the idea seems way outdated for the current state of software engineering. The Rust compiler provides a massive benefit for AI coding because it catches whole classes of failure cases, so all the AI has to do is implement the logic, which is usually a no-brainer for something like Claude Code or Codex.
For example, https://github.com/SaynaAI/sayna has been mostly Claude Code, plus me reviewing the output and making small manual touches where needed. For the most part, I have found that Claude Code writes far more stable Rust than JS.
It would be easier and safer to hand the JS code to a translator, have it translated into Rust, and then continue AI development in Rust, than to invest time in an automated JS-to-Rust compiler. IMHO!
I’ve heard it said and I won’t argue your personal experience.
However, I don’t see it that way at all.
I find Claude much more capable of writing large chunks of Python or React/JS frontend code than writing F#, a very statically type-checked language.
It’s fine, but a lot more hand-holding is needed, a lot more tar pits visited.
If anything, it seems to be a popularity contest of whichever language features most heavily in the training data. If AI assistance is the goal, everyone should write Python and JavaScript.
I’ve worked with relatively large projects in TypeScript, Python, C#, and Swift, and I’ve come to believe that the more opinionated the language and framework, the better. C# .NET, despite being a monster, was a breath of fresh air after TS. Each iteration just worked. Each new feature simply got implemented.
My experience also points to compiled languages that give immediate feedback on build. It’s nearly impossible to stop any AI agent from using 'as any' or 'as unknown as X' casts in TypeScript - LLMs will “fix” problems by sweeping them under the rug. The larger the codebase, the more review and supervision is required. A TS codebase rots much faster than a Rust/C#/Swift one.
You can fix a lot of that with a strict tsconfig, Biome and a handful of claude.md rules, I’ve found. That said, it’s been ages since I wrote a line of C#, but it remains the most productive language I’ve used. My TypeScript productivity has only recently begun to approach it.
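For reference, a minimal version of that setup might look like the following; the exact rule set is just an example, not the parent's actual config. In tsconfig.json:

```json
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "noImplicitOverride": true,
    "noFallthroughCasesInSwitch": true
  }
}
```

And in biome.json, make explicit `any` a hard error:

```json
{
  "linter": {
    "rules": {
      "suspicious": {
        "noExplicitAny": "error"
      }
    }
  }
}
```

Note that `noExplicitAny` catches `as any` but, as far as I know, not `as unknown as X` double casts; those still need a claude.md rule and review.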
In my experience, "memory" is really not that helpful in most cases. For all of my projects, I keep the documentation files and feature specs up to date, so that LLMs always know where to find things and which coding style guides the project follows.
Maintaining memory is a considerable burden: you have to make sure a simple "fix this linting" request doesn't get stored as "we always fix this type of issue in this particular way." That's also my major problem with ChatGPT's memory: it starts to respond from the assumption that "this is what's correct for this person."
I am curious: who actually sees benefits from memory in coding? Is it "it learns how to code better", or "it learns how the project is structured"? Either way, to me, this sounds like something basic project setup already covers.
I think it cuts both ways. For example, I've definitely had the experience where, when typing into ChatGPT, I know ahead of time that whatever "memory" it is storing and injecting will likely degrade my answer, so I hop over to incognito mode. I've also had the experience where I had a loosely related follow-up question and didn't want to dig through my chat history for the exact conversation, so it's nice to know that ChatGPT will probably pull the relevant details into context.
I think similar concepts apply to coding. In some cases, you have all the context you need up front (good coding practices help with this), but in many cases, there's a lot of "tribal knowledge" scattered across various repos that a human veteran of the org would certainly know, but an agent wouldn't (of course, there's somewhat of a circular argument here: if the agent eventually learned this tribal knowledge, it could just write it down into a CLAUDE.md file ;)). I also think there's a clear separation between procedural knowledge and learned preferences; the former is probably better represented as skills committed to a repo, while I view the latter more as a "system prompt learning" problem.
ChatGPT's implementation of memory is terrible. It quickly fills up with useless garbage, and sometimes even with plainly incorrect statements that are usually relevant only to one obscure conversation I had with it months ago.
A local, project-specific llm.md is absolutely something I require, though. Without one, language models kept "fixing" random things in my code that they considered incorrect, despite comments on those lines literally telling them NOT TO CHANGE THIS LINE OR THIS COMMENT.
My llm.md is structured like this:
- Instructions for the LLM on how to use it
- Examples of a bad and a good note
- LLM editable notes on quirks in the project
It helps a lot with making an LLM understand when things are unusual for a reason.
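For illustration, a minimal llm.md along those lines might look like this (the notes themselves are invented placeholders):

```
# llm.md

## Instructions
Read this file before editing code. When you discover a quirk the hard way,
add a note under "Notes". Never delete a note you did not write.

## Note format
Bad:  "Fixed the parser."  (no location, no reason)
Good: "parser/retry.rs: the retry loop is unbounded on purpose; the upstream
      server closes the connection after 3 attempts."

## Notes (LLM-editable)
- build.sh skips the tests on purpose; CI runs them separately.
- The malformed dates in fixtures/ are intentional; they exercise the error
  paths. Do not "fix" them.
```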
Besides that file, I wrap every prompt in a project-specific intro and outro. I use these to take care of common undesirable LLM behavior, like removing my comments.
I also tell it to use a specific format for its own comments, so I can automatically clean those up on the next pass, which takes care of most of the aftercare.
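For a rough idea of that cleanup pass, assuming the model is told to prefix every comment it writes with a marker like `AI:` (the marker format and the Python focus here are just examples, not my exact setup):

```python
import re
import sys

# Matches full-line comments the model was told to tag, e.g. "# AI: retry on 503".
# Human-written comments are left untouched.
AI_COMMENT = re.compile(r"^[ \t]*#[ \t]*AI:.*\n?", re.MULTILINE)

def strip_ai_comments(path: str) -> None:
    with open(path, encoding="utf-8") as f:
        source = f.read()
    cleaned = AI_COMMENT.sub("", source)
    if cleaned != source:
        with open(path, "w", encoding="utf-8") as f:
            f.write(cleaned)

if __name__ == "__main__":
    for path in sys.argv[1:]:
        strip_ai_comments(path)
```

This only handles full-line comments; a trailing `# AI:` comment at the end of a code line would need a second, more careful pattern.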
I'm curious - how do you currently manage this `llm.md` in the tooling you use? E.g., do you symlink `AGENTS/CLAUDE.md` to `llm.md`? Also, is there any information you duplicate across your project-specific `llm.md` files that could potentially be shared globally?
I use a custom tool that basically merges all my code into a single prompt. Most of my projects are relatively small, usually maxing out at 200k tokens, so I can just dump the whole thing into Gemini Pro for every feature set I am working on. It's a more manual way of working, but it ensures full control over the code changes.
For new projects I usually just copy the llm.md from the tool itself and strip out the custom parts. I might make generating it a feature of the tool in the future.
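The core of such a merge step can be tiny. A minimal sketch (the file extensions and the llm.md-first ordering are just examples, not my actual tool):

```python
from pathlib import Path

# Example: which file types belong in the prompt.
INCLUDE = {".py", ".rs", ".ts", ".md", ".toml"}

def build_prompt(root: str) -> str:
    """Concatenate every source file under a path header, llm.md first."""
    files = [p for p in sorted(Path(root).rglob("*"))
             if p.is_file() and p.suffix in INCLUDE]
    # Put the project notes up front so the model reads them before the code.
    files.sort(key=lambda p: p.name != "llm.md")
    return "\n\n".join(f"===== {p} =====\n{p.read_text(encoding='utf-8')}"
                       for p in files)

if __name__ == "__main__":
    print(build_prompt("."))
```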
A few days ago I tried AntiGravity (on default settings), and it was an awful experience. Slow, ponderous, continually making dumb mistakes, responding to feedback only with more code; it took at least 3 hours (and a lot of hand-holding) to end up with a broken version of what I wanted.
I gave up, tried again using my own tool, and was done in half an hour. Not sure if it will work as well for other people, but it definitely does for me.
I think the problem with ChatGPT and other RAG-based memory solutions is that it's not possible to collaborate with the agent on what its memory should look like - so it makes sense that it's much easier to just have a stateless system and message queue, to avoid mysterious pollution. Letta's memory management, by contrast, is primarily text/file-based, so it is very transparent and controllable.
An example of how this kind of memory can help is learned skills https://www.letta.com/blog/skill-learning - if your agent takes the time to reflect on its experience and create a skill, that skill is much more effective at making it better next time than just putting the raw trajectory into context.
IMO context poisoning is only fatal when you can't see what's going on (e.g., black-box memory systems like ChatGPT memory). The memory system used in the OP is fully white-box - you can see every raw LLM request (and exactly how the memory influenced the final prompt payload).
As an American who has lived in Finland for 3 years, I can assure you that the Finns are the most content people you can imagine! They can go months without talking to anyone and still consider themselves "happy", but the correct word in English is "content".
The report is correct; they just advertise it with the wrong word in the headline, I guess because it makes for a more clickbait title than "The most content country".
As a Finn, I would agree. Finland is fine. Not the greatest, and not the happiest, but overall still fine. In most areas the cost of living is pretty reasonable and services are sufficient. The police, for example, do a good enough job. I could probably earn more money somewhere else, but why bother...
You don't see many cops in Finland. You just don't.
Firstly, because the social benefits system keeps a lot of people out of trouble - call it bribery if you like, but it meets basic needs. Secondly, because there are a lot of private "security" types around - for example in supermarkets, keeping out drunks and dealing with shoplifters - letting the police focus on the real stuff.
It's extremely important if you're interested in social stability. Unhappy people have a tendency to turn authoritarian and lash out, hurting both their own society and anyone who looks different.
I dunno, "discontent" is a pretty politically charged word, going back to Shakespeare - "Now is the winter of our discontent" from Richard III is referring to an attempted political overthrow.
It's referring to a successful political overthrow.
The quote really needs the first two lines:
Now is the winter of our discontent
Made glorious summer by this sun of York.
The verb in the sentence is "is made", not just "is". "Now" it is summer, not winter. They were discontent in the past. Now they are happy.
York (Richard's brother, Edward, now King Edward IV) has overthrown King Henry VI. There's also an important pun: "York" also refers to their father, also named Richard, who was the Duke of York until his death at the hands of Henry's faction. So Edward is also the "son of York".
That said, Richard is being sarcastic. He's plotting the next political overthrow, which will also be successful, and which will in turn be overthrown again. That, at least, will put an end to it, if for no other reason than that literally everybody else is dead.
One of my most important jobs as a Shakespeare actor is to find ways to enunciate some of his over-long sentences in a way that allows the audience to follow them just by listening.
In this case, it's not too hard. Shakespeare likes giving you oppositions, like "winter" and "summer". Put the stress there, and the audience will follow. And you don't need to breathe at the end of the line; it can flow directly into the next one.
Not really. The US situation is engineered so that only two parties ever get in, and they are practically impossible to remove. Wait several years and the other lot will get in.
Even with Trump we see a lot of policies and directions that the Democrats have pursued previously.
Do you have a reference for that? The World Population Review [1] says that alcoholism rates are similar to or lower than those of the US, Australia, and Brazil, and definitely lower than in many other countries around the world.
These countries often have strict rules about alcohol, which reflects this. In some places you had to buy alcohol from a government store. Then there is the usual tactic of taxing it to death. As a result, illegal alcohol production is common out in the countryside.