Can LLMs do randomness? (rnikhil.com)
61 points by whoami_nr 60 days ago | 69 comments
sgk284 59 days ago [-]
Fun post! Back during the holidays we wrote one where we abused temperature AND structured output to approximate a random selection: https://bits.logic.inc/p/all-i-want-for-christmas-is-a-rando...
onionisafruit 58 days ago [-]
I enjoyed that.

When you asked it to choose by picking a random number between 1 and 4, it skewed the results heavily to 2 and 3. It could have interpreted your instructions to mean literally between 1 and 4 (not inclusive).

sgk284 57 days ago [-]
Yea, absolutely. That's a good point. We could have phrased that more clearly.
LourensT 58 days ago [-]
could you use structured output to make a more efficient estimator for the logits-based approach?
sgk284 57 days ago [-]
The two mechanisms are a bit disjoint, so I don't think it's the right tool to do so. Though it could have been an interesting experiment.
captn3m0 59 days ago [-]
Wouldn’t any randomness (for a fixed combination of hardware and weights) be a result of the temperature and any randomness inserted at inference-time?

Otherwise, doing an H/T comparison is just a proxy for the underlying token probabilities and the temperature configuration (+ hardware differences for a remotely-hosted model).

whoami_nr 58 days ago [-]
Author here. Yeah, totally agreed. The more rigorous way to do this would be to use a fixed seed and temperature with a local model, then sample the logprobs and analyse that data.

I had an hour to kill and did this experiment.
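For anyone curious, here's a minimal sketch of that kind of setup with Hugging Face transformers (not what the post actually used; the model name and prompt are placeholders). It reads the next-token probabilities for "Heads" and "Tails" directly instead of sampling:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    torch.manual_seed(0)                           # fixed seed
    name = "gpt2"                                  # placeholder local model
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    prompt = "Flip a coin. Answer with Heads or Tails. Answer:"
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]          # logits for the next token
    probs = torch.softmax(logits, dim=-1)

    for word in [" Heads", " Tails"]:
        t = tok.encode(word)[0]                    # first sub-token of each answer
        print(word, float(probs[t]))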

delusional 58 days ago [-]
Congratulations, this was all a test to see if there was anyone on HN with any knowledge of how LLMs work, and you gave the correct answer.
moffkalast 58 days ago [-]
I was gonna say floating point errors might contribute especially at fp16 and fp8, but those are technically deterministic.
DimitriBouriez 59 days ago [-]
One thing to consider: we don’t know if these LLMs are wrapped with server-side logic that injects randomness (e.g. using actual code or external RNG). The outputs might not come purely from the model's token probabilities, but from some opaque post-processing layer. That’s a major blind spot in this kind of testing.
avianlyric 58 days ago [-]
The core of an LLM is completely deterministic. The randomness seen in LLM output is purely the result of post processing the output of the pure neural net part of the LLM, which exists explicitly to inject randomness into the generation process.

This is what the “temperature” parameter of an LLM controls. Setting the temperature of an LLM to 0 effectively disables that randomness, but the result is a very boring output that's likely to end up caught in a never-ending loop of useless output.
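As a rough sketch of what that parameter does, assuming the usual softmax-with-temperature sampling step (toy numbers, not any particular model):

    import numpy as np

    def sample(logits, temperature, rng=np.random.default_rng(0)):
        if temperature == 0:                       # greedy: always the top token
            return int(np.argmax(logits))
        p = np.exp(np.array(logits) / temperature)
        p /= p.sum()                               # softmax over scaled logits
        return int(rng.choice(len(p), p=p))

    logits = [2.0, 1.9, -1.0]                      # toy next-token scores
    print([sample(logits, 0.0) for _ in range(5)])   # always token 0
    print([sample(logits, 1.0) for _ in range(5)])   # mostly tokens 0 and 1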

orbital-decay 59 days ago [-]
You're right, although tests like this have been done many times locally as well. This issue comes from the fact that RL usually kills the token prediction variance, disproportionately narrowing it to 2-3 likely choices in the output distribution even in cases where uncertainty calls for hundreds. This is also a major factor behind fixed LLM stereotypes and -isms. Base models usually don't exhibit that behavior and have sufficient randomness.
remoquete 59 days ago [-]
Agreed. These tests should be performed on local models.
Repose0941 59 days ago [-]
Is randomness even possible? You can't technically prove it, only observe something that looks close to it. They talk a little about this at https://www.random.org/#learn
sebstefan 58 days ago [-]
That's a question as old as time
whoami_nr 59 days ago [-]
Author here. I know 0-10 inclusive has one extra even number. I also just did this for fun, so don't take the statistical significance aspect of it very seriously. You also need to run this multiple times with multiple temperature and top_p values to do this more rigorously.
segh 58 days ago [-]
Cool experiment! My intuition suggests you would get a better result if you let the LLM generate tokens for a while before giving you an answer. Could be another experiment idea to see what kind of instructions lead to better randomness. (And to extend this, whether these instructions help humans better generate random numbers too.)
Mr_Modulo 58 days ago [-]
In the summary at the top it says you used 0-10 but then for the prompt it says 1-10. I had assumed the summary was incorrect but I guess it's the prompt that's wrong?
dr_dshiv 59 days ago [-]
Oh, surprising that Claude can do heads/tails.

In a project last year, I did a combination of LLMs plus a list of random numbers from a quantum computer. Random numbers are the only useful things quantum computers can produce—and that is one thing LLMs are terrible at

david-gpu 58 days ago [-]
During my tenure at NVidia I met a guy who was working on special versions of the kernels that would make them deterministic.

Otherwise, parallel floating point computations like these are not going to be perfectly deterministic, due to a combination of two factors. First, the order of some operations will be random due to all sorts of environmental conditions such as temperature variations. Second, floating point operations like addition are not ~~commutative~~ associative (thanks!!), which surprises people unfamiliar with how they work.

That is before we even talk about the temperature setting on LLMs.

enriquto 58 days ago [-]
> floating point operations like addition are not commutative

maybe you meant associative? Floating point addition is commutative: a+b is always equal to b+a for any values of a and b. It is not associative, though: a+(b+c) is in general different to (a+b)+c, think what happens if a is tiny and b,c are huge, for example.

david-gpu 58 days ago [-]
Sorry, yes, I meant associative. Thanks for the important correction.

To think that I used to do this for a living...

simulator5g 58 days ago [-]
How is that any different? 1+(2+3) = 6

(1+2)+3 = 6

0.000001+(200000+300000) = 500000.000001

(0.000001+200000)+300000 = 500000.000001

david-gpu 58 days ago [-]
You need to take it a step further, since e.g. 64-bit floats have a ton of mantissa bits.

Here's an example in python3.

    >>> "{:.2f}".format(1e16 + (1 + 1))
    '10000000000000002.00'
    >>> "{:.2f}".format((1e16 + 1) + 1)
    '10000000000000000.00'
enriquto 57 days ago [-]
take b and c with opposite signs
jansan 59 days ago [-]
What I find more important is the ability to get reproducible results for testing.

I do not know about other LLMs, but Cohere allows setting a seed value. When setting the same seed value it will always give you the same result for a specific prompt (of course unless the LLM gets an update).

OTOH I would guess that they normally simply generate a random seed value on the server side when processing a prompt, and it depends on their random number generator how random that really is.
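The same idea with a local model, as a sketch (assuming transformers' sampling draws from torch's global RNG, so re-seeding it before each run makes the output repeatable on the same hardware):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")          # placeholder model
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    ids = tok("Pick heads or tails:", return_tensors="pt").input_ids

    for _ in range(2):
        torch.manual_seed(42)                            # same seed each run
        out = model.generate(ids, do_sample=True, max_new_tokens=5,
                             pad_token_id=tok.eos_token_id)
        print(tok.decode(out[0]))                        # prints the same text twice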

ekianjo 59 days ago [-]
That's expected behavior when you run an LLM locally with a fixed seed and temperature at zero
bestest 59 days ago [-]
I would suggest they repeat the experiment including answer sets from both "choose heads or tails" AND "choose tails or heads" (ditto for numbers), or rephrase the question so it doesn't present a "choice" at all but instead asks for "a random integer" (e.g. from 0 to 9; btw, they're asking to choose from 0 to 10 inclusive, which is inherently skewed since the even subset is bigger in that case).
GuB-42 58 days ago [-]
Is the LLM reset between each event?

If LLMs are anything like people, I would expect a different result depending on that. The idea that random events are independent is very unintuitive to us, resulting in what we call the Gambler's Fallacy. LLMs' attempts at randomness are very likely to be just as biased, if not more so.

maaaaattttt 58 days ago [-]
I think randomness needs to be better defined. In the article it seems to be taken as an even distribution of event occurrences. I agree that it is very unintuitive for us because, I believe, we assume randomness to be any sequence of events that doesn't follow any known/recognizable pattern. Show a section of the Fibonacci sequence to a 10-year-old kid and they will most likely find the sequence of numbers to be random (maybe they will note that it is always increasing, but that's it). Even in this article, the fact that o1 always throws "heads" could indicate that it "knows" what randomness is, and is then just being random by throwing only heads.

I personally would define ideal randomness as behavior that is fundamentally uncomputable and/or cannot be expressed as a mathematical function. If this definition holds, then the question cannot apply to LLMs, as they are just a (big) mathematical function.

mrdw 58 days ago [-]
They should measure for different temperatures: at 0 the output will be the same every time, but it's interesting to see how the results change for temperatures from 0.01 to 2. I'm not sure, though, whether temperature is implemented the same way in all LLMs.
baalimago 58 days ago [-]
I'd be interested to see the bias in random character generation. It's something which would be closer to the domains of LLMs, seeing that they're 'next word generators' (based on probability).

How cryptographically secure would an LLM rng seed generator be?

ganiszulfa 58 days ago [-]
LLMs are acting like humans, I believe humans will have biases if you ask them to do random things :)

On a more serious note, you could always adjust the temperature so they behave more randomly.

hleszek 59 days ago [-]
Can humans do randomness? Obviously not and I expect if you ask people for a random number, then odd numbers will predominate.
whoami_nr 59 days ago [-]
Veritasium did a video on this. Most people guess 37 when asked to pick a number between 1 and 100.
hoseja 58 days ago [-]
100/e rounded is 37

Pretty good.

boroboro4 58 days ago [-]
It would be nice to inspect the logits data/distribution. How close its output is to uniform is the question.
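One simple way to quantify "how close to uniform", as a small sketch: KL divergence between the model's probabilities over the candidate answers and the uniform distribution (0 bits means perfectly uniform):

    import numpy as np

    def kl_from_uniform(p):
        p = np.asarray(p, dtype=float)
        p = p / p.sum()                            # normalise the candidate probabilities
        u = 1.0 / len(p)
        return float(np.sum(p * np.log2(p / u)))   # in bits; 0 == perfectly uniform

    print(kl_from_uniform([0.5, 0.5]))             # 0.0
    print(kl_from_uniform([0.9, 0.1]))             # ~0.53 bits, heavily skewed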
naghing 58 days ago [-]
Why not provide randomness to LLMs instead of expecting them to produce it?
evertedsphere 59 days ago [-]
0-10 inclusive is one extra even
p1dda 59 days ago [-]
LLMs don't even understand basic logic dude, or physics or gravity
edding4500 58 days ago [-]
This is silly. Behind an LLM sits a deterministic algorithm. So no, it is not possible without inserting randomness into the algo by other means, for example via the sampling temperature.

Why are all these posts and news about LLMs so uninformed? This is human-built technology. You can actually read up on how these things work. And yet they are treated as if they were an alien species that must be examined by sociological means and methods where it is not necessary. Grinds my gears every time :D

whoami_nr 58 days ago [-]
Author here. I know it’s silly. I understand to some extent how they work. I was just doing this for fun. Took about 1hr for everything and it all started when a friend asked me whether we can use them for a coin toss.
edding4500 58 days ago [-]
Sorry, I did not mean to downtalk the blog post :) I did not mean silly as in stupid. It's rather the title that I think is misleading. Can an LLM do randomness? Well, PRNGs are part of it, so the question boils down to whether PRNGs can do randomness. As mentioned here before, setting the temperature of, say, GPT-2 to zero makes the output deterministic. I was 99% sure that you as the author knew about this :)
alew1 58 days ago [-]
The algorithms are not deterministic: they output a probability distribution over next tokens, which is then sampled. That’s why clicking “retry” gives you a different answer. An LM could easily (in principle) compute a 50/50 distribution when asked to flip a coin.
aeonik 58 days ago [-]
They are still deterministic. You can set the temperature to zero to get consistent output, but even with a nonzero temperature the sampling usually uses a seed and a pseudo-random number generator. Though this would depend on the implementation.

https://github.com/huggingface/transformers/blob/d538293f62f...

dist-epoch 58 days ago [-]
As someone who tried really hard to get deterministic outcomes out of them, they really are not.

Layers can be computed in slightly different orders (due to parallelism), on different GPU models, and this will cause small numerical differences which will compound due to auto-regression.

delusional 58 days ago [-]
Could someone enlighten me on how to compute layers in parallel? I was under the impression that the linearity of the layer computation was why we were mostly bandwidth constrained. If you can compute the layers in parallel, then why do we need high bandwidth?
dist-epoch 58 days ago [-]
https://developer.nvidia.com/blog/mastering-llm-techniques-i...
throwawaymaths 58 days ago [-]
all things being equal, if you fix all of those things and the hardware isn't buggy, you get the same results, and I've set up CI with golden values that requires this to be true. Indeed, occasionally you have to change golden values depending on the implementation, but mathematically the algorithm is deterministic, even if in practice determinism requires a bit more effort.
dkersten 58 days ago [-]
But the reality is that all things aren’t equal and you can’t fix all of those things, not in a way that is practical. You’d have to run everything serially (or at least in a way you can guarantee identical order) and likely emulated so you can guarantee identical precision and operations. You’ll be waiting a long time for results.

Sure, it’s theoretically deterministic, but so are many natural processes like air pressure, or the three body problem, or nuclear decay, if only we had all the inputs and fixed all the variables, but the reality is that we can’t and it’s not particularly useful to say that well if we could it’d be deterministic.

orbital-decay 58 days ago [-]
It's definitely reachable in practice. Gemini 2.0 Flash is 100% deterministic at temperature 0, for example. I guess it's due to the TPU hardware (but then why aren't other Gemini models like that...).
throwawaymaths 58 days ago [-]
Anyways, this is all immaterial to the original question, which is whether LLMs can do randomness [for a single user with a given query], so from a practical standpoint the question itself needs to survive "all things being equal". That is to say, suppose I stand up an LLM on my own GPU rig and the algorithmic scheduler doesn't do too many out-of-order operations (very possible depending on the ollama or vllm build).
orbital-decay 58 days ago [-]
Setting the temperature to zero reduces the process to greedy search, which does a lot more things to the output than just making it non-random.
im3w1l 58 days ago [-]
Yes so it's basically asking whether that probability distribution is 50/50 or not. And it turns out that it's sometimes very skewed. Which is a non-obvious result.
kurikuri 58 days ago [-]
So, what ‘algorithms’ are you talking about? The randomness comes from the input value (the random seed). Once you give it a random seed, a pseudo-random number generator (PRNG) makes a sequence from that seed. When the LLM needs to ‘flip a coin,’ it just consumes a value from the PRNG’s output sequence.

Think of each new ‘interaction’ with the LLM as having two things that can change: the context and the PRNG state. We can also think of the PRNG state as having two things: the random seed (which makes the output sequence), and the index of the last consumed random value from the PRNG. If the context, random seed, and index are the same, then the LLM will always give the same answer. Just to be clear, the only ‘randomness’ in these state values comes from the random seed itself.

The LLM doesn’t make any randomness, it needs randomness as an input (hyper)parameter.
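A toy illustration of that state model with Python's stdlib PRNG (no LLM involved; the point is just that the "randomness" is fully determined by seed + index):

    import random

    def flip(seed, index):
        rng = random.Random(seed)          # PRNG state comes entirely from the seed
        for _ in range(index):             # advance to the given index
            rng.random()
        return "heads" if rng.random() < 0.5 else "tails"

    print(flip(123, 0), flip(123, 0))      # same seed + index -> same answer
    print(flip(123, 1))                    # advancing the index can change it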

orbital-decay 58 days ago [-]
The raw output of a transformer model is a list of logits, confidence scores for each token in its vocabulary. It's only deterministic in this sense (same input = same scores). But it can easily assign equal scores to 1 and 0 and zero to other tokens, and you'll have to sample it randomly to produce the result. Whether you consider it external or internal doesn't matter, transformers are inherently probabilistic by design. Randomness is all they produce. And typically they aren't trained with the case of temperature 0 and greedy sampling in mind.
kurikuri 48 days ago [-]
> But it can easily assign equal scores to 1 and 0 and zero to other tokens, and you’ll have to sample it randomly to produce the result. Whether you consider it external or internal doesn’t matter, transformers are inherently probabilistic by design.

The transformer is operating on the probability functions in a fully deterministic fashion, you might be missing the forest for the trees here. In your hypothetical, the transformer does not have a non-deterministic way of selecting the 1 or 0 token, so it will rely on a noise source which can. It does not produce any randomness at all.

orbital-decay 37 days ago [-]
It's one way to look at it, but consider that you necessarily need the noise source in case 1 and 0 are strictly equal. You can't tell which one is the answer until you decide randomly.
kurikuri 36 days ago [-]
Right, so the LLM needs some randomness to make that decision. The LLM performs a series of deterministic operations until it needs the randomness to make this decision; there is no randomness within the LLM itself.
kbelder 58 days ago [-]
But the randomness doesn't directly translate to a random outcome in results. It may randomly choose from a thousand possible choices, where 90% of the choices are some variant of 'the coin comes up heads'.

I think a more useful approach is to give the LLM access to an api that returns a random number, and let it ask for one during response formulation, when needed.
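A sketch of what that could look like with an OpenAI-style tool-calling API (the tool name and schema here are made up for illustration): the model asks for a number, and the server answers with a real RNG.

    import secrets

    # Hypothetical tool definition, passed as `tools=[...]` to a chat completion call.
    random_int_tool = {
        "type": "function",
        "function": {
            "name": "random_int",
            "description": "Return a uniform random integer in [low, high].",
            "parameters": {
                "type": "object",
                "properties": {
                    "low": {"type": "integer"},
                    "high": {"type": "integer"},
                },
                "required": ["low", "high"],
            },
        },
    }

    def handle_random_int(low: int, high: int) -> int:
        # Executed locally whenever the model emits a random_int tool call;
        # uses a CSPRNG instead of the model's token probabilities.
        return low + secrets.randbelow(high - low + 1)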

throwawaymaths 58 days ago [-]
i think gp would consider the sampling bit a part of the API, not a part of the algorithm.
kerkeslager 58 days ago [-]
The algorithms are definitely not deterministic. That said I agree with your general point that experimenting on LLMs as if they're black boxes with unknown internals is silly.

EDIT: I'm seeing another poster saying "Deterministic with a random seed?" That's a good point--all the non-determinism comes from the seed, which isn't particularly critical to the algorithm. One could easily make an LLM deterministic by simply always using the same seed.

dist-epoch 58 days ago [-]
> all the non-determinism comes from the seed

not fully true, when using floating point the order of operations matters, and it can vary slightly due to parallelism. I've seen LLMs return different outputs with the same seed.
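A quick way to see the order-of-operations effect at small scale (a sketch; whether the last bits actually differ depends on the data and the summation algorithm):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(100_000).astype(np.float32)

    a = x.sum()              # one accumulation order
    b = x[::-1].sum()        # same numbers, reversed order
    print(a, b, a == b)      # typically differs in the last bits for float32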

onionisafruit 58 days ago [-]
That’s an interesting observation. Usually we try to control that, but with LLMs the non-determinism is fine.

It seems like that would make it hard to unit test LLM code, but they seem to be managing.

kerkeslager 58 days ago [-]
Oh, that's really interesting. Hadn't thought of that.
_joel 58 days ago [-]
Deterministic with a random seed?
edding4500 58 days ago [-]
But then the random seed is the source of randomness and not the training data. So the question "Can LLMs do randomness" would actually boil down to "Can PRNGs do randomness".
chaoz_ 58 days ago [-]
"You can actually read up on how these things work."

While you can definitely read about how some parts of a very complex neural network function, it's very challenging to understand the underlying patterns.

That's why even the people who invented components of these networks still invest in areas like mechanistic interpretability, trying to develop a model of how these systems actually operate. See https://www.transformer-circuits.pub/2022/mech-interp-essay (Chris Olah)

kaibee 58 days ago [-]
Yes, but sometimes asking dumb questions is the first step to asking smart questions. And OP's investigation does raise some questions to me at least.

1. Give a model a context with some # of actually random numbers and then ask it to generate the next random number. How random is that number? Repeat N times, graph the results, is there anything interesting about the results?

2. I remember reading about how brains/etc are kinda edge-balanced chaotic systems. So if a model is bad at outputting random numbers (ie: needs a very high temperature for the experiment from step 1 to produce a good distribution of random numbers) What if anything does that tell us about the model?

3. Can we add a training step/fine-tuning step that makes the model better at the experiment from step #2? What effect does that have on its benchmarks?

I'm not an ML researcher, so maybe this is still nonsense.
