When we run a regression to estimate coefficients, each estimate is accompanied by a standard error. You can think of the standard error as a measure of how much uncertainty we have about that estimate. One way of looking at it: if we were to re-run the regression thousands of times, each time on a different random sample from the underlying population, what would the standard deviation of all those estimates be?
The standard error depends mostly on two things: how large the sample is, and how noisy the relationship between X and Y is. Generally speaking, larger samples lead to lower standard errors, and noisier data leads to higher standard errors.
The true model is Y = Xβ + ε, where ε has mean zero and standard deviation σ.
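The "re-run the regression thousands of times" thought experiment can be checked directly by simulation. The sketch below, with illustrative values for β, σ, and the sample size (none of which come from the text), draws many samples from the model Y = Xβ + ε, re-estimates the slope each time, and compares the standard deviation of those estimates to the textbook formula for the slope's standard error, SE(β̂) = σ / √Σ(xᵢ − x̄)².

```python
import numpy as np

# Illustrative assumptions: true slope beta, noise std dev sigma,
# sample size n, and number of simulated replications.
rng = np.random.default_rng(0)
beta, sigma, n, reps = 2.0, 1.0, 50, 10_000

x = rng.normal(size=n)  # fixed design, re-used in every replication

# Draw a fresh sample of Y each time and re-estimate the slope.
slopes = []
for _ in range(reps):
    y = beta * x + rng.normal(scale=sigma, size=n)
    slope, intercept = np.polyfit(x, y, 1)  # least-squares fit
    slopes.append(slope)

empirical_se = np.std(slopes)

# Analytic standard error of the slope under this model:
# sigma divided by the square root of the sum of squared deviations of x.
analytic_se = sigma / np.sqrt(np.sum((x - x.mean()) ** 2))

print(f"empirical SE: {empirical_se:.4f}")
print(f"analytic  SE: {analytic_se:.4f}")
```

The two numbers agree closely, and the formula makes the dependence on sample size and noise explicit: a larger sample increases Σ(xᵢ − x̄)² and shrinks the standard error, while a larger σ inflates it.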