
Many companies use RHEL Workstation to run proprietary GUI applications. The application usually runs on RHEL Servers and uses X11 forwarding to show the GUI on the Workstation.
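(Concretely, that usually means something like running `ssh -X app-server` from the workstation and starting the vendor's binary there, with the GUI drawn locally over the forwarded X11 connection. `app-server` is a placeholder; some sites use trusted forwarding with `-Y` instead.)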

Running the same OS on the client and on the server makes support much simpler. ISVs may not even support more modern OSs like Fedora or Ubuntu.

Those companies don't need an Office Suite as they have Windows machines that can run Microsoft Office. They just need a Linux desktop environment that is easy to use and stays out of the way when accessing the Workstation through VNC.


A very important part of people trusting you is them being able to understand what you say without extra effort compared to listening to a native speaker.

An easy way to improve intonation and fluency is to imitate a native speaker; copying things like the intervocalic T and D is a consequence of that. It would be easier for a native Spanish speaker to use the Spanish /t/ and /d/, but intonation and fluency would be impaired.

The sounds don't "flow" as they should.


> An easy way to improve intonation and fluency is to imitate a native speaker.

There are lots of variations in English pronunciation: Singaporean, Australian, and Scottish native speakers sound very different. I don't know to what extent they benefit from adjusting their accent to match the local dialect when working in a different English-speaking country.

Also, as a non-native speaker, I wonder if it's worth practicing my accent, considering that everybody has a different accent anyway. Rather than trying to mimic a North American accent (which I'll never manage anyway), I'd be more interested in identifying and fixing the major issues in my pronunciation.


The specific problem is that the American intervocalic /t/ and /d/ are very similar to the Spanish /ɾ/, and if you don't get them right they aren't perceived as the right phoneme. The Spanish /t/ is more dental and the intervocalic /d/ is more of a [ð], but both will still sound correct in English.


If we consider writing out of bounds to be legal, we make it impossible to reason about the behavior of programs.


Hence why I (like many other compiler developers) am inherently skeptical whenever anyone says "just stop exploiting undefined behavior".


I'm not a compiler developer, but I'm at least as skeptical as you, because there is no sign that the "just stop exploiting UB" people actually want any specific semantics. IMO they want Do What I Mean, which isn't a realizable language feature.

If you could somehow "stop exploiting UB", they'd just be angry either that you're still exploiting an actual language requirement they don't like (and so have decided ought to be excluded), or that you followed the rules too literally when obviously the thing they meant ought to happen even though that's not what they actually wrote. It's lose-lose for compiler vendors.


I am one of the "stop exploiting UB" camp. [1]

I agree that some of us are unreasonable, but I do recognize that DWIM is not feasible.

I just want compilers to treat UB the same as unspecified behavior, which cannot be assumed away.

[1]: https://gavinhoward.com/2023/08/the-scourge-of-00ub/


You write "Note that those surprised programmers are actually Rust compiler authors", but I can't figure out which of the many links points to "surprised programmers" who are actually rustc authors, so I don't even know if you're right.

Rust's safe subset doesn't have any UB, but unsafe Rust can of course cause UB very easily, because Rust's rules are extremely strict and only safe Rust gets the compiler's guarantee that they aren't broken. So it seems weird for people who work on the compiler guts to be "surprised".


I'm a Rust compiler author, and I'm fully in favor of "UB exploitation". In fact, LLVM should be doing more of it. LLVM shouldn't be holding back optimizations in memory-safe languages for edge cases that don't really matter in practice.



I don't see any surprised compiler authors in that thread. The reporter immediately suggests the correct underlying reason for the bug and another compiler author even says that they wondered how long it would take for someone to notice this.

Even if you read any surprise into their messages they wouldn't be surprised that C does something completely unreasonable, they would be surprised that LLVM does something unreasonable (by default).


Wait, that's not even linked in your post AFAICT. It's also about an LLVM bug and not in fact exploiting UB.

"LLVM shouldn't miscompile programs" is uncontroversial, but claiming that these miscompilations are somehow "Exploiting Undefined Behaviour" is either incompetent or an attempt to sell your position as something it isn't.


> I just want compilers to treat UB the same as unspecified behavior, which cannot be assumed away.

Unspecified behavior is defined as the "use of an unspecified value, or other behavior where this International Standard provides two or more possibilities and imposes no further requirements on which is chosen in any instance".

Which (two or more) possibilities should the standard provide for out-of-bounds writes? Note that "do what the hardware does" wouldn't be a good specification because it would either (a) disable all optimizations or (b) be indistinguishable from undefined behavior.
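To make the dilemma concrete, here's a sketch of mine (in Rust, but the same reasoning applies to C): once a program stores through an unchecked index, the only executions with defined behavior are the in-bounds ones, and the optimizer is allowed to reason from that.

  // Hypothetical sketch, not from the thread. Because an
  // out-of-bounds store would be UB, every execution that reaches
  // the comparison has i < 4, so the optimizer may fold `i < 4` to
  // `true`. If the store instead meant "write to wherever that
  // address lands", the comparison could not be removed, and
  // neither could most alias-based optimizations.
  fn store_then_check(buf: &mut [i32; 4], i: usize) -> bool {
      unsafe { *buf.get_unchecked_mut(i) = 0 };
      i < 4
  }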


There is also a completely different scenario where out-of-bounds writes aren't undefined behavior anymore: when you've manually defined the arrays in an assembly source file and exported their symbols. In that situation you know what's before and after each array, so pointer math into an adjacent area has a well-known effect.


Can you use the remaining SIMD lanes for processing independent data streams?

Think encoding or decoding non-overlapping parts of a video.


So, you can't necessarily do that, because video is compressed, and compressed data is by definition not predictable. (If it were predictable, it wouldn't be compressed well enough.)

That means you have to stick to the inside of the current block. But there are some tricks; for an IDCT, for example, there's a previous stage where you can rearrange the output memory elements for free, so you can shuffle things as you need to fit them into vectors.


It implements tmux control mode. It's very useful when working with a remote server.

No other terminal implements it AFAIK.
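(For the curious: control mode is what you get by running `tmux -CC`. Instead of drawing a character grid, tmux then speaks a text protocol that lets the attached program, here iTerm2, mirror sessions, windows, and panes as native UI.)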


What does tmux control mode do in practice? I use both (iTerm2 and tmux), but not for this specific reason; I have just used both as a default for a long time.

So, what magic am I using without realizing it?


tmux's metaphors are implemented in the GUI: tmux windows become native iTerm2 windows or tabs, tmux panes become native split panes, etc. Attach/detach and so on will restore layouts.

I believe the session can even be shared with a normal tmux session.


Yes, this was such a nice feature when I used a Mac. And indeed the session seamlessly works as a normal tmux session. I believe WezTerm does tmux-style terminal multiplexing, but doesn't integrate with tmux.


The security team cares about minimizing risks to the company and to their own careers.

Deviating from what everybody else is doing means the burden of proving that your policies are sane is on you, and if anything bad happens, your head is the first to roll.

You use CrowdStrike and the company lost millions of dollars due to the outage? That's not your problem; you applied industry-standard practices.

You don't use CrowdStrike and the company got hacked? You will have to explain to the executives and the board why you didn't apply industry-standard practices, and you will be fired.


You can still have use-after-free errors when you use array indices. This can happen if you implement a way to "free" elements stored in the vector ("free" in a broad sense). There's no way for Rust to prevent you from marking an array index as free and later using it.


> There's no way for Rust to prevent you from marking an array index as free and later using it.

I 2/3rds disagree with this. There are three different cases:

- Plain Vec<T>. In this case you just can't remove elements. (At least not without screwing up the indexes of other elements, so not in the cases we're talking about here.)

- Vec<Option<T>>. In this case you can make index reuse mistakes. However, this is less efficient and less convenient than...

- SlotMap<T> or similar. This uses generational indexes to solve the reuse problem, and it provides other nice conveniences. The only real downside is that you need to know about it and take a dependency. (See the sketch below.)
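Since the generational-index trick is the non-obvious part, here's a minimal sketch of mine of how such a map works inside (the real slotmap crate is more refined; all names below are made up):

  // A slot remembers which "generation" of occupant it holds; a key
  // is only valid while its generation matches the slot's.
  #[derive(Clone, Copy, PartialEq, Eq, Debug)]
  struct Key { index: usize, generation: u32 }

  struct Slot<T> { generation: u32, value: Option<T> }

  struct Arena<T> { slots: Vec<Slot<T>>, free: Vec<usize> }

  impl<T> Arena<T> {
      fn new() -> Self {
          Arena { slots: Vec::new(), free: Vec::new() }
      }

      fn insert(&mut self, value: T) -> Key {
          if let Some(index) = self.free.pop() {
              let slot = &mut self.slots[index];
              slot.value = Some(value);
              Key { index, generation: slot.generation }
          } else {
              self.slots.push(Slot { generation: 0, value: Some(value) });
              Key { index: self.slots.len() - 1, generation: 0 }
          }
      }

      fn remove(&mut self, key: Key) -> Option<T> {
          let slot = self.slots.get_mut(key.index)?;
          if slot.generation != key.generation {
              return None; // stale key: the slot has moved on
          }
          slot.generation += 1; // invalidate every outstanding key
          self.free.push(key.index);
          slot.value.take()
      }

      fn get(&self, key: Key) -> Option<&T> {
          let slot = self.slots.get(key.index)?;
          if slot.generation == key.generation { slot.value.as_ref() } else { None }
      }
  }

  fn main() {
      let mut arena = Arena::new();
      let a = arena.insert("first");
      arena.remove(a);
      let b = arena.insert("second");  // reuses slot 0, new generation
      assert_eq!(arena.get(a), None);  // stale key fails the check
      assert_eq!(arena.get(b), Some(&"second"));
  }

The use-after-free is still possible in the sense that the old key still exists, but it's downgraded to a None you have to handle, rather than silently aliasing the slot's new occupant.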


The consequences of use-after-free are different in the two languages.

In Rust it is a logic error, which leads to data corruption or panics within your application. In C it leads to data corruption and is an attack vector against the entire machine.

And yes, while Rust itself doesn’t help you with this type of error, there are plenty of Rust libraries which do.


Why does an optimizing compiler introduce nondeterminism?

In my mind an optimizing compiler is a pure function that takes source code and produces an object file.


Well, a lot of things can have an influence here: a multithreaded build, PGO, or even a different access order in a hash table inside the optimizer. Things get probabilistic and thus somewhat nondeterministic: the build itself is nondeterministic, but the runtime/final execution is deterministic.
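To make the hash-table point concrete: Rust's std HashMap (like many hash tables) is randomly seeded per process, so anything that iterates one to pick an order can differ from run to run. A toy illustration of mine:

  use std::collections::HashMap;

  fn main() {
      let mut worklist = HashMap::new();
      worklist.insert("inline", 1);
      worklist.insert("dce", 2);
      worklist.insert("licm", 3);
      // Iteration order depends on a per-process random seed, so two
      // runs of the same binary can print these in different orders.
      // An optimizer walking such a worklist can emit bit-different
      // output on every build unless it sorts (or fixes the seed) first.
      for (pass, id) in &worklist {
          println!("{pass}: {id}");
      }
  }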


The problem here is that Emacs is the Elisp interpreter. They are the same thing.

Emacs would have to start another process for Elisp analysis and code completion. That would be a massive rearchitecture of the system.


Wouldn't it be sufficient to "just" write a kind of context manager that watches the macro expansion, looks at what's being done at each step, and divvies that up into safe and unsafe execution, so that at least the example from the article

  (rx (eval (call-process "touch" nil nil nil "/tmp/owned")))
doesn't just automatically run? Obviously it's a lot of work, depending on how sophisticated you want it to be, but you probably don't need to rearchitect much.


For folks who have never poked around in Emacs: the specific difficulty is that if you are in an Emacs Lisp file, you almost certainly want to edit what Emacs itself is doing.

I'm specifically talking about scenarios such as "you set debug-on-error."

To that end, the proposal would probably be something like: flymake/flycheck use a child Emacs process to query the code, but evaluating code on the user's behalf would still happen in the main Emacs?


It biases results towards people who "studied how to do the test" rather than studying the material being evaluated.

