# How much math do you need for Computer Science?

Over the course of my studies, I’ve taken so many classes: over 240 credit hours throughout my bachelor’s, master’s, and PhD. That’s like 80 classes. Was it all necessary? Perhaps not all of it, but I’ve learned many useful things taking those classes. In this mini-series, we’re going to talk about some of those fundamental courses that many universities offer in their Computer Science & Engineering programs. We will talk about what these classes are about and what you should expect from them.

Let’s start with math classes. How much math do you need for computer science? Do you even need it at all? Is it even relevant to computer science? The short answer is yes, it is relevant. Discrete math, for example, focuses on some

of the fundamental concepts in Computer Science, such as set theory, graph theory, algorithm complexity, combinatorics, logic, and so on. Many computer programs run in a deterministic way, which means that no matter how many times you run a program, you should expect the same results. Logic deals with such deterministic systems.

However, not everything in computer science is deterministic, especially when you are building machines that make decisions based on noisy real-life data. That’s where probability and statistics come into play. Probability theory provides a set of rules for reasoning about likelihoods, rather than about whether a statement is strictly true or false. Machine learning and data science rely heavily on probability theory. Probability and statistics are useful in general, but they are a must-know if you are interested in AI research.
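To make that concrete, here is a tiny sketch of Bayes’ rule in Python. The scenario and all the numbers are made up purely for illustration:

```python
# Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B).
# Toy scenario (made-up numbers): a spam filter sees the word "prize".
p_spam = 0.2                  # prior: P(spam)
p_word_given_spam = 0.6       # likelihood: P("prize" | spam)
p_word_given_ham = 0.05       # P("prize" | not spam)

# Total probability of seeing the word at all.
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Posterior: P(spam | "prize") -- the word raises the belief from 0.2 to 0.75.
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(round(p_spam_given_word, 3))  # 0.75
```

Instead of a yes/no answer, you get a degree of belief that is updated by evidence — exactly the kind of reasoning noisy data demands.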

Speaking of AI, you should definitely have at least a basic understanding of linear algebra if you want to develop deep learning algorithms. Even if you are not interested in AI, linear algebra is essential for understanding many other subfields of computer science. Are you interested in game development and computer graphics? Computer graphics makes heavy use of linear algebra. When you play a video game or render a 3D model, what happens under the hood is a bunch of matrix multiplications. That’s why GPUs are very good at matrix multiplication, and that’s how GPUs ended up powering artificial intelligence as a byproduct. Many machine learning models do a lot of matrix multiplications until they learn to extract useful information from large-scale data.
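As a rough sketch of what such a matrix multiplication looks like, here is a plain-Python example that rotates a hypothetical 2D point. Real graphics pipelines and ML frameworks run this on the GPU with highly optimized libraries, but the math is the same:

```python
import math

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

# Rotate the point (1, 0) by 90 degrees counterclockwise.
theta = math.pi / 2
rotation = [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]
point = [[1.0], [0.0]]  # column vector

rotated = matmul(rotation, point)
print([round(v[0], 6) for v in rotated])  # [0.0, 1.0]
```

A game applies transformations like this to every vertex of every model, every frame — which is why hardware that multiplies matrices fast matters so much.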

If you are curious to know more about how this learning happens, check out my earlier deep learning crash course series.

How about calculus? Well, as far as I know, it’s a required course everywhere, so you’ll have to take it anyway. But in any case, a basic knowledge of limits, derivatives, and integrals may come in handy in many situations when you try to solve a technical problem. For example, if you are building a controller to keep your room at a particular temperature, your algorithm will need to compute the derivative and the integral of your room temperature over time.
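Here is a minimal sketch of that idea in Python, in the style of a PID controller. The gains and the temperature readings are made-up values, not tuned for any real system:

```python
def pid_step(error, prev_error, integral, dt, kp=2.0, ki=0.1, kd=0.5):
    """One update of a PID controller (gains are made-up examples).

    The integral term accumulates past error over time; the derivative
    term approximates how fast the error is changing.
    """
    integral += error * dt                  # numerical integral of the error
    derivative = (error - prev_error) / dt  # numerical derivative
    output = kp * error + ki * integral + kd * derivative
    return output, integral

# Room is 2 degrees too cold now, and was 3 degrees too cold a second ago.
output, integral = pid_step(error=2.0, prev_error=3.0, integral=0.0, dt=1.0)
print(round(output, 3))  # 3.7
```

The controller output tells the heater how hard to work: the error itself, its accumulated history, and its rate of change each contribute a term.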

Another example: to train an artificial neural network, you need to compute the derivative of the error with respect to the weights. Luckily, you don’t need to do this by hand, but it’s still good to know how it’s done to have a better understanding of what your algorithm actually does.
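As a toy illustration of computing the derivative of the error with respect to a weight, here is gradient descent on a hypothetical one-weight model with a single made-up data point:

```python
# Fit y = w * x to one data point (x=2, y=6) by gradient descent.
# The error is E = (w*x - y)**2, so dE/dw = 2 * (w*x - y) * x.
x, y = 2.0, 6.0
w = 0.0            # start from an arbitrary weight
learning_rate = 0.1

for _ in range(50):
    gradient = 2 * (w * x - y) * x   # derivative of the error w.r.t. w
    w -= learning_rate * gradient    # step against the gradient

print(round(w, 4))  # converges toward 3.0
```

Frameworks automate the derivative computation for millions of weights at once, but every step is this same idea: follow the gradient downhill until the error is small.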

Speaking of derivatives, it’s very common in computer science to compute derivatives and integrals numerically. That’s where numerical analysis comes in handy. When you work with discrete data, integration becomes calculating a cumulative sum, and differentiation becomes subtracting consecutive numbers. Even if you work with continuous data, it might still be more computationally feasible to approximate the result of a mathematical function using an iterative process than to derive it analytically.
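Here is what that looks like on discrete data, using a few made-up temperature samples:

```python
from itertools import accumulate

# Hourly temperature readings (made-up sample data).
samples = [20.0, 21.0, 23.0, 22.0]

# Discrete "integration": the running cumulative sum of the samples.
integral = list(accumulate(samples))

# Discrete "differentiation": differences between consecutive samples.
derivative = [b - a for a, b in zip(samples, samples[1:])]

print(integral)    # [20.0, 41.0, 64.0, 86.0]
print(derivative)  # [1.0, 2.0, -1.0]
```

No symbolic math required — just sums and subtractions over the samples you actually have.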

How about game theory? It must have something to do with video games, right? Well, not really! Or, on second thought, it actually is used in some video games. Game theory models interactions between rational decision makers, so you can use it to model the behavior of non-player characters. For example, to program a chess-playing agent, you can use the minimax algorithm, which minimizes the possible loss in the worst case.
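Here is a minimal minimax sketch over a tiny hand-built game tree. A real chess agent would generate board positions and evaluate them, but the recursion is the same idea:

```python
def minimax(node, maximizing):
    """Return the best achievable score for a tiny game tree.

    A node is either a number (the score of a finished game) or a
    list of child nodes (the moves available from this position).
    """
    if isinstance(node, (int, float)):  # leaf: game over
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Hand-built toy tree: our move, then the opponent's reply.
tree = [
    [3, 12],  # if we pick move 0, the opponent chooses min(3, 12) = 3
    [2, 8],   # if we pick move 1, the opponent chooses min(2, 8) = 2
]
print(minimax(tree, maximizing=True))  # 3
```

We assume the opponent plays their best response, so we pick the move whose worst outcome is least bad — move 0 here.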

Last but not least, information theory also plays a big role in many branches of computer science and engineering. Arguably, the most important concept in information theory is entropy, which measures how much information is present in a signal. The lower the entropy, the more compressible a signal is. Information theory is used in data compression, communication, and storage, as well as in statistics and machine learning.
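As a small sketch, here is Shannon entropy computed for a couple of toy strings:

```python
from collections import Counter
from math import log2

def entropy(text):
    """Shannon entropy of a string, in bits per character.

    Uses the form p * log2(1/p), summed over each symbol's frequency p.
    """
    counts = Counter(text)
    total = len(text)
    return sum((c / total) * log2(total / c) for c in counts.values())

# A repetitive signal carries little information and compresses well;
# a varied one carries more and resists compression.
print(entropy("aaaaaaaa"))  # 0.0 bits: perfectly predictable
print(entropy("abcdefgh"))  # 3.0 bits: 8 equally likely symbols
```

A compressor can squeeze the first string down to almost nothing, while the second needs a full 3 bits per character — entropy sets that limit.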

You might ask: is all of this necessary for computer science? Well, it depends on what you want to do. Perhaps you don’t need to master all of it, but in any case, I would say at least a basic understanding of these concepts is necessary. There’s a misconception that computer science and programming are the same thing. In fact, programming is a tool, whereas computer science is a field of study. You can very well work as a full-stack developer and solve well-defined problems without having to know any math. But to specialize in a topic, to do research, or to find optimal solutions to novel problems, you will certainly need at least some math. Some experienced programmers might disagree and say they do all of this without having to know any math. If you are such an experienced programmer, perhaps you have already internalized the math behind the algorithms in a practical way without explicitly studying it.

Alright, that’s all for today. In the next video, we will talk about the core computer science and engineering classes, such as data structures and algorithms. If you liked this video, please subscribe for more videos. And as always, thanks for watching, stay tuned, and see you next time.