
Came to say exactly the same thing :)

This post has the general, highly academic/dogmatic tone that I’ve seen when certain folks talk about REST. Most of the article talks about what _not_ to do, and offers very little detail on how to actually do it.

The idea of having the client and server decoupled via a REST API that is itself discoverable, and that allows independent deployment, seems like a great advantage.

However, the article lacks even the simplest example of an API done the “wrong” vs. the “right” way. Say I have a TODO API: how do I make it use HATEOAS (also, who’s coming up with these acronyms… smh)?

Overall the article comes across more as academic pontification on “what not to do” than as actionable advice.


Agreed. I wish there were some examples to better understand what the author means. Like, in a web app, do I have any prior knowledge about the "_links" actions? Do I know that the server is going to return the actions "self" and "activate"? Is the idea to hide the routes from the user until the API call, but they should know that the API could return actions like "self", "activate" or "deactivate"? How do you communicate that an action requires a specific body? For example, the activate call is done via POST and expects a JSON body with a date inside. How do you tell that to the user?
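
For what it's worth, here is a minimal sketch of the kind of "_links" payload people usually mean. It's Python, and the route names, field names, and the "schema" hint are all made up for illustration, not taken from the article:

    # Hypothetical HATEOAS-style response for one TODO item.
    # All names here are illustrative, not a real API.
    todo = {
        "id": "42",
        "title": "Write the report",
        "status": "inactive",
        "_links": {
            "self": {"href": "/todos/42", "method": "GET"},
            # Only advertised because the item is currently inactive;
            # an active item would advertise "deactivate" instead.
            "activate": {
                "href": "/todos/42/activate",
                "method": "POST",
                # One (non-standard) way to hint "this action needs a body":
                "schema": {"activated_at": "date-time"},
            },
        },
    }

The usual idea is that the client hard-codes only the entry point and the link relation names ("self", "activate", ...), then follows whatever hrefs the server hands back instead of constructing URLs itself.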


> However, the article lacks even the simplest example of an api done the “wrong” vs the “right” way.

Unless the design and requirements are unusually complex or extreme, all styles of API and front end work well enough. Any example would have to be lengthy, to provide enough context for the advantages of “true” ReST architecture to show, and it would end up contrived.


I’m an MIT grad from ‘12. PM me (email is on my profile).


Anyone can say anything on YouTube. I think you’re confusing confirmation bias with “citing your sources”.


You didn't watch it either, huh? Nice.

Also, what bias? I didn't make the stats up. Statements of fact cannot be insolent. If you disagree with the facts, then let's talk about that and look for more data. However, what has happened in this thread is people assuming I have an agenda, when my only agenda is that I think people need to hear what was said in that clip.

The mentality that "it was real close to 50-50 and Trump didn't get a majority of the vote, and therefore we can keep on doing what we're doing" is exactly what the video clip was speaking to. It didn't work, and the tactics need to change if anyone wants to see change.

I don't care that people want to argue with me personally. But they're not doing themselves any favors as far as getting the people they want to see in office elected.


UUIDs are very wasteful [1]. For most use cases you can replace them with much shorter strings and still have a very low chance of collisions [2]; a rough sketch of the idea follows the links.

[1] https://henvic.dev/posts/uuid/

[2] https://alex7kom.github.io/nano-nanoid-cc/
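
As a rough illustration of the idea behind [2] (not the actual nanoid library, just the concept, using only the Python standard library; the alphabet and function name are illustrative):

    import math
    import secrets

    # 64 URL-safe characters, nanoid-style.
    ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz-_"

    def short_id(length: int = 12) -> str:
        # 12 characters x 6 bits each = ~72 random bits,
        # versus 122 random bits in a UUIDv4, in a third of the text.
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(short_id())                      # e.g. 'f3Kp9_aZ2QxB'
    print(math.log2(len(ALPHABET)) * 12)   # 72.0 bits of entropy

The second link is a calculator for picking the length given your ID generation rate and the collision probability you're willing to accept.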


Call me crazy, but I'm simply splitting my UUIDs into their high and low 64 bits and indexing off those.

IE

    CREATE TABLE foo(
        id_ms    UNSIGNED BIG INT NOT NULL,  -- most-significant 64 bits of the UUID
        id_ls    UNSIGNED BIG INT NOT NULL,  -- least-significant 64 bits
        PRIMARY KEY (id_ms, id_ls)
    ) WITHOUT ROWID;
That works well with UUIDv7 and is just storing 128 bits rather than a full string. In most languages it's pretty trivial to turn two longs into a UUID and vice versa.
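
For example, in Python the round trip is a one-liner in each direction (function names here are just illustrative):

    import uuid

    def uuid_to_longs(u: uuid.UUID) -> tuple[int, int]:
        # Split the 128-bit value into its most- and least-significant 64 bits.
        return u.int >> 64, u.int & 0xFFFFFFFFFFFFFFFF

    def longs_to_uuid(ms: int, ls: int) -> uuid.UUID:
        return uuid.UUID(int=(ms << 64) | ls)

    u = uuid.uuid4()
    assert longs_to_uuid(*uuid_to_longs(u)) == u

(If the target column is a signed 64-bit integer, as it effectively is in SQLite, you'd also reinterpret each unsigned half as signed before storing it.)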


Is there any advantage to this approach over Postgres's native uuid support, which should store the same number of bits?


No. This approach is strictly for DBs like SQLite without uuid or 128-bit integer support.


Sure, at the cost of increased complexity of access. Sometimes the waste is worth the simplicity.


Sounds complex; just use a UUID. If that’s the dominating factor for storage, then you have a different problem to solve.


Just FYI, the person who replied wasn't just "someone" at Cloudflare. It was Kenton Varda (kentonv here). He's the creator of Cloudflare Workers, and he's an incredible engineer.


On the other hand, everyone is just someone, somewhere. Let's not criticise people for not giving enough of an intro.


Yes, of course! Sorry, I didn't mean to make it seem insignificant; I was just excited to see him show up there, from a personal fanboy perspective.


If you can run this using Ollama, then you should be able to use https://www.continue.dev/ with both IntelliJ and VSCode. Haven’t tried this model yet, but overall this plugin works well.


They say there's no llama.cpp support yet, so no Ollama yet (which uses llama.cpp).


Correct. The only back-end that Ollama uses is llama.cpp, and llama.cpp does not yet have Mamba2 support. The issues to track Mamba2 and Codestral Mamba support are here:

https://github.com/ggerganov/llama.cpp/issues/8519

https://github.com/ggerganov/llama.cpp/issues/7727

Mamba support was added in March of this year:

https://github.com/ggerganov/llama.cpp/pull/5328

I have not yet seen a PR to address Mamba2.



They meant that there is no support for Codestral Mamba in llama.cpp yet.


Unrelated: all my devices freeze when accessing this page (desktop Firefox and Chrome, mobile Firefox and Brave). Is this the best alternative for AI code helpers besides GitHub Copilot and Google Gemini in VSCode?


I've been using it for a few months (with Starcoder 2 for code, and GPT-4o for chat). I find the code completion actually better than GitHub Copilot's.

My main complaint is that the chat sometimes fails to correctly render some GPT-4o output (e.g. LaTeX expressions), but it's mostly fixed with a custom system prompt. It also significantly reduces the battery life of my MacBook M1, but that's expected.


I'm quite happy with Cody from Sourcegraph https://marketplace.visualstudio.com/items?itemName=sourcegr...


I've been using Copilot as well as Claude (Sonnet 3-3.5) and ChatGPT (4-4o) since the beginning of the year, with somewhat mixed success. My general take is that, at least so far, the improvement is very much marginal. These tools save me a few seconds by quickly surfacing information that would otherwise require me to google and search through docs/stackoverflow/etc., or by writing code that's generally tedious (like tests).

Copilot is mostly useful for staying in the zone, allowing me to focus on a larger task while letting it get some of the details right (which it does ~90% of the time).

On the other hand, I use ChatGPT/Claude for more open ended tasks (e.g. "I got this <insert obscure> error", "how do I configure this framework so that xxx") which previously I would have googled hoping to find a stackoverflow answer or a doc page somewhere. For this use case I'd say it's ~50% successful, but I often have to deal with hallucinations; sometimes just following up with "Are you sure?" helps, but it's hit or miss.

As I said at the beginning, mostly marginal improvement. It's definitely saved me time, but thus far nothing that I couldn't do myself by spending a little more time. Largely it is a nice-to-have, not a need-to-have.


I’d take a slightly different perspective. I’d like to think that these tools are in some ways “humanizing”: we can offload things that we’re not particularly good at (like memorization tasks) and instead use our capacities to do things that (at least for now) we humans are uniquely capable of doing. As an example: back in the ’90s people knew many phone numbers by heart; nowadays I don’t think people know more than a handful, if that. Does that mean that “phone contacts” are making us dumber? Or perhaps we can put the time/effort/capacity to better use.


I do agree that we might want to offload mundane and boring/repetitive tasks which do not add value to our lives. But this 'value' is a personal and subjective thing, so it's hard to give a recipe that will fit everyone.

Hence I think it's important for people to be educated in how to maintain this balance of easy life vs. hard life, so they can optimise what they get out of their own life.

I, for example, do not use IntelliSense or auto-correct (I do make a lot of spelling mistakes!). I want to learn to program, and IntelliSense would break that learning. I do not want to 'produce programs', for which IntelliSense is _super good_, as it increases productivity a lot. I know most people make the tradeoff the other way, as they prefer productivity over learning.


> to better use

I'm still waiting for this to take effect.


Paperspace is a great way to go for this. You can start by just using their notebook product (similar to Colab), and you get to pick which type of machine/GPU it runs on. Once you have the code you want to run, you can rent machines on demand:

https://www.paperspace.com/notebooks


I used Paperspace for a while. Pretty cheap for mid-tier GPU access (an A6000, for example). There were a few things that annoyed me, though. For one, I couldn’t access free GPUs with my team account. So I ended up quitting and buying a 4090 lol.

