The easiest way to read it is "there exists a function h in O(1) such that f(x) <= g(x) + h(x)."
I think first we should teach "f in O(g)" notation, then teach the above, then observe that a special case of the above is the "abuse of notation" f(x) = O(g(x)).
I think the confusion is because, strictly speaking, $f(x) = O(g(x))$ is an abuse of notation. $O(g(n))$, $\Theta(g(n))$, and friends are sets. We can't say that a function equals a set, or that a function "is less than" another function, but notoriously mathematics runs on javascript, so we try to do something instead of throwing a type error.
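For concreteness, here is one standard way to spell out the set being alluded to (the $x \to \infty$ case; other limit points work analogously):

```latex
O(g) \;=\; \bigl\{\, f \;:\; \exists\, C > 0,\ \exists\, x_0,\ \forall x > x_0,\ \lvert f(x) \rvert \le C\,\lvert g(x) \rvert \,\bigr\}
```

Under this reading, $f \in O(g)$ is an ordinary membership statement, and $f(x) = O(g(x))$ is shorthand for it.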
Here "is less" is interpreted as "eventually less, for all sufficiently large values", and "plus a set" is interpreted as "plus some function from that set".
I never liked this notation for asymptotics and I always preferred the $f(x) \in O(g(x))$ style, but it's just notation in the end.
To me it seems similar to the + C on an antiderivative (or more generally, to quotient objects). Technically you are dealing with an equivalence class of functions, so a set. But it's usually counterproductive to think of it that way (and when you study this stuff properly, one of the first things you prove is that you usually don't need to: you can use an arbitrary representative as a stand-in for the whole class), so you write F(x) + C.
I think the Landau notation is a bit more finicky with the details. When it's really a quotient (like modular arithmetic) I'm with you, but here $O()$ morally means "at most this" and often you have to use the "direction of the inequality" to prove complexity bounds, so I'm more comfortable with the set notation. But again, it's just notation, I could use either.
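The "direction of the inequality" point can be made concrete with a standard example: the abused equality is not symmetric, so it behaves more like $\subseteq$ than like $=$:

```latex
O(n) = O(n^2) \quad\text{(every function in } O(n) \text{ is in } O(n^2)\text{)}
\qquad\text{but}\qquad
O(n^2) \ne O(n).
```

This is exactly why treating the $O$-expressions as sets, with $=$ read left-to-right as containment, keeps proofs honest.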
Although, when I learned foundations of mathematics, every function was a set, and whether you wanted them or not, you got plenty of junk theorems out of that fact.
I feel it's not that bad an abuse of notation, as it's kinda consistent with other areas of mathematics -
A coset r + I in a quotient ring, an affine subspace v + W, etc. These aren't comparisons of literal sets but of a representative with a class label, and the `=, +` are defined not just on the actual objects but on some other structure used to make the comparison too.
> computer science students should be familiar with the standard f(x)=O(g(x)) notation
I have always thought that expressing it like that instead of f(x) ∈ O(g(x)) is very confusing. I understand the desire to use ordinary arithmetic notation for sums of terms, but "concluding" this notation with equality, when it's not an equality, is grounds for confusion.
O(1) just means "a bounded function (on a neighborhood of infinity)". Unlike f(x), which refers to some function by name, O(1) refers to some function by a property it has. It's the same principle at work in "even + odd = odd".
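Spelled out (this is my reading of the analogy, with each side taken as a set of numbers sharing a property and "=" as a one-directional claim, just like the $O$ convention):

```latex
\text{even} + \text{odd} = \text{odd}
\quad\text{means}\quad
\forall a \in 2\mathbb{Z},\ \forall b \in 2\mathbb{Z}+1:\quad a + b \in 2\mathbb{Z}+1.
```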
Programmers wringing their hands over the meaning of f(x)=O(g(x)) never seem to have manipulated any expression more complex than f(x)=O(g(x)).
f(x) = g(x) + O(1)
f(x) - g(x) = O(1)
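A standard example of the heavier manipulation being alluded to is Stirling's approximation in Landau form, where an $O$-term is carried through the algebra like any other term:

```latex
\ln n! \;=\; n \ln n \;-\; n \;+\; O(\ln n)
\quad\Longrightarrow\quad
\frac{\ln n!}{n} \;=\; \ln n \;-\; 1 \;+\; O\!\left(\frac{\ln n}{n}\right).
```

Dividing through by $n$ turns the $O(\ln n)$ term into $O(\ln n / n)$, which is the kind of step the "it's a set" reading has to justify (any function bounded by $C \ln n$, once divided by $n$, is bounded by $C \ln n / n$).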