We all learned in high school that $\sin(0)=\sin(\pi)=0$. But what if we ask python?
import math
print("A:", math.sin(0.))
print("B:", math.sin(math.pi))
The output I get on my computer is
A: 0.0
B: 1.2246467991473532e-16
and I'm willing to bet you'll get the same answer on your computer if you copy/paste the above code snippet. A is obviously correct, but what's going on with B? $1.22\times 10^{-16}$ is definitely small, but it's not zero. Don't be alarmed: your computer is not broken. Nor is this a problem with python; you'll get the same answer using C++, Java, or any other computer language. The reason is quite simple:
Your computer can't do real math because it can't use real numbers. Be careful!
Storing integers inside a computer is easy: you just write them in base-2. This immediately gives you rational numbers (also known as fractions), because you can store the numerator and denominator as integers. But how do you store real numbers (by which I really mean irrational numbers, since we already covered the rational subset)? Take, for example, the best-known irrational number: $\pi$. Being irrational, $\pi$ requires an infinite number of digits to represent (see the first million digits here). This is true in any base, so if we wanted to store $\pi$ in base-2, we'd need an infinite number of bits of memory. Clearly this is not practical.
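As an aside, python's standard library actually implements this integer-pair idea in its `fractions` module. A quick sketch of the difference between exact rational arithmetic and floating point:

```python
from fractions import Fraction

# A rational number stored as two integers: numerator and denominator.
third = Fraction(1, 3)

# Exact arithmetic: no rounding ever happens.
print(third + third + third == 1)  # True

# The same kind of sum with floats picks up rounding error:
print(0.1 + 0.2 == 0.3)  # False
```

This is why exact rational arithmetic exists but isn't the default: every operation can grow the numerator and denominator, so it's far slower than fixed-size floats.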
Instead, computers use floating point arithmetic, which is essentially scientific notation $$\underbrace{3.14159}_{\mathrm{mantissa}}\times10^0.$$A floating point system is primarily defined by its precision $P$, the number of digits stored in the mantissa. Since only one digit can precede the dot, numbers smaller than 1 or larger than 10 are represented by adjusting the integer exponent. To get a more accurate computational result, you simply increase the precision $P$ of the numbers you use to calculate it. Modern computers universally conform to the floating point standard IEEE 754, which lays out the rules for floating point arithmetic (in base-2 rather than base-10, but the idea is the same).
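To see the precision $P$ in action, python's `decimal` module implements a base-10 floating point system (unlike the base-2 doubles used by `math`) and lets you dial the number of mantissa digits up and down:

```python
from decimal import Decimal, getcontext

# Compute 1/3 at two different precisions P (digits in the mantissa).
getcontext().prec = 10
print(Decimal(1) / Decimal(3))  # 0.3333333333

getcontext().prec = 30
print(Decimal(1) / Decimal(3))  # 0.333333333333333333333333333333
```

More digits, more accuracy; but no finite $P$ ever stores $1/3$ (or $\pi$) exactly.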
This brings us back to our python test. Answer A is what we expect using real numbers because IEEE 754 floating point numbers can store 0 exactly, and the $\sin()$ function knows that $\sin(0.0)=0.0$. But B uses $\texttt{math.pi}$, which is not $\pi$ but the closest floating point number to $\pi$ after rounding. For this reason, answer B is actually the correct answer to the question we posed: we cannot use $\pi$ as an input, only $\texttt{math.pi}$, and answer B is the $\sin()$ of this approximate $\pi$. So how wrong is $\texttt{math.pi}$? The Taylor series of $\sin(x)$ near $x=\pi$ is\begin{equation}\sin(x)=-(x-\pi)+\mathcal{O}((x-\pi)^3).\end{equation}Plugging in answer B and solving for $x$, we get\begin{equation}\texttt{math.pi}=\pi - 1.2246467991473532\times 10^{-16},\end{equation}which is slightly less than $\pi$.
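We can also check that this error is as small as it could possibly be. `math.ulp` (available in python 3.9+) gives the spacing between a float and the next representable one, and the discrepancy we just derived fits within a single ulp of $\texttt{math.pi}$:

```python
import math

b = math.sin(math.pi)        # ~1.2246e-16; the Taylor series says b ≈ pi - math.pi
spacing = math.ulp(math.pi)  # gap between math.pi and the next double (~4.4e-16)

# math.pi is within one float spacing of the true pi:
# no representable double is closer.
print(b < spacing)  # True
```

In other words, `math.pi` is the best a 64-bit float can do; the residual in answer B is irreducible.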
TLDR: If you are a scientist, then many of your results are probably numerical and computer generated. But your computer can't do real number arithmetic because it can't use real numbers. When your computer says the answer is 3e-16, this could be a very precise result, and the answer could indeed be a small, non-zero number. But it is more likely that 3e-16 comes from a rounding error, and the actual answer should be zero. For this reason, some expressions are very bad and should not be used (e.g. $1-\cos(x)$ for small $x$). Understanding why such expressions are bad requires a deeper look into floating point arithmetic. I highly recommend reading David Goldberg's "What Every Computer Scientist Should Know About Floating-Point Arithmetic" for starters, and seeing where it takes you. Ultimately, you should assume that every numerical result has some floating point error. And if you're not careful, this floating point error can become very large indeed. So be careful.
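To illustrate why $1-\cos(x)$ is dangerous for small $x$: the subtraction cancels almost all of the mantissa's digits (this is called catastrophic cancellation). The standard fix is the identity $1-\cos(x)=2\sin^2(x/2)$, which avoids the subtraction entirely. A minimal sketch (assuming a correctly-rounded `cos`, which any modern libm provides here):

```python
import math

x = 1e-8  # the true value of 1 - cos(x) is about 5e-17

# cos(1e-8) is closer to 1.0 than to any other double, so it rounds to
# exactly 1.0 and the subtraction destroys all the information.
naive = 1.0 - math.cos(x)

# Algebraically identical, but with no cancelling subtraction.
stable = 2.0 * math.sin(x / 2) ** 2

print(naive)   # 0.0   -- completely wrong
print(stable)  # ~5e-17 -- correct to full precision
```

Same math, same inputs, wildly different answers; which is the whole point of this post.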