
3. Data Manipulation

In this article, we will discuss four traditional data manipulation problems in SQL / Pandas. They are from

3.1: https://leetcode.cn/problems/nth-highest-salary/

3.2: https://leetcode.cn/problems/rank-scores/

3.3: https://leetcode.cn/problems/rearrange-products-table/

3.4: https://leetcode.cn/problems/calculate-special-bonus/

In this article, we won't use the same data as those questions. We'll use a simpler dataset to illustrate the functions.

3.1 TopK & n-th largest

TopK means selecting the top k rows from the table according to some rule. n-th largest means selecting only the n-th largest row according to some rule.

3.1.1 MySQL

To select the first couple of rows from a table, you can use the LIMIT keyword. LIMIT M, N selects a total of N rows, starting from row M (0-indexed). See the following example and explanation.

-- table is
+-------+-------+
| name | score |
+-------+-------+
| Alice | 80 |
| Bob | 90 |
| Cindy | 100 |
| David | 70 |
| Ella | 60 |
| Frank | 80 |
+-------+-------+

-- select from 1st row, a total of 2 rows:
select * from Scores LIMIT 1, 2;
+-------+-------+
| name | score |
+-------+-------+
| Bob | 90 |
| Cindy | 100 |
+-------+-------+

-- select total of 0 rows means nothing
select * from Scores LIMIT 1, 0;
+------+-------+
| name | score |
+------+-------+

-- if starting index > len(table), nothing would be returned
select * from Scores LIMIT 7, 1;
+------+-------+
| name | score |
+------+-------+

-- select from a minus (negative) offset means select from the first row
select * from Scores LIMIT -17, 2;
+-------+-------+
| name | score |
+-------+-------+
| Alice | 80 |
| Bob | 90 |
+-------+-------+

Therefore, it is quite easy to implement TopK and n-th largest. For example, suppose we want to find the top 2 students by score and the student with the 2nd highest score in this table:

-- find top 2 grade students
select * from Scores order by score desc LIMIT 0, 2;
+-------+-------+
| name | score |
+-------+-------+
| Cindy | 100 |
| Bob | 90 |
+-------+-------+

-- find the 2nd highest student (note that it is 0-index)
select * from Scores order by score desc LIMIT 1, 1;
+------+-------+
| name | score |
+------+-------+
| Bob | 90 |
+------+-------+

3.1.2 Pandas

In pandas, we have the method pd.DataFrame.nlargest(N, col_name) to find the top N rows. Note that the result is already sorted in descending order, so accessing its last row gives exactly the n-th largest.

# the df is
    name  score
0  Alice     80
1    Bob     90
2  Cindy    100
3  David     70
4   Ella     60
5  Frank     80

# find top 2 grade students
df.nlargest(2, 'score')
# result is:
    name  score
2  Cindy    100
1    Bob     90

# find the 2nd highest student (the last row of the top-2 result)
df.nlargest(2, 'score').iloc[-1]
# result is:
name     Bob
score     90
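One caveat (my own note, not part of the original example): if tied scores should count as a single position, as in the LeetCode "nth highest salary" problem, you may want to drop duplicate values before taking the n-th largest. A minimal sketch:

# Sketch: drop duplicate scores first so ties count once
nth_score = df['score'].drop_duplicates().nlargest(2).iloc[-1]
# with the sample data above, nth_score == 90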

3.2 rank, dense_rank

This question requires us to build a dense rank. That is, when several rows have the same score, they should all appear and share the same rank, and the following rank should not be skipped. Fortunately, we have built-in functions to help us achieve that.

3.2.1 MySQL

There is a function called DENSE_RANK(), which can be used to create a new column based on the ranking information.

-- The table is:
+----+-------+
| id | score |
+----+-------+
| 1 | 3.5 |
| 2 | 3.65 |
| 3 | 4 |
| 4 | 3.85 |
| 5 | 4 |
| 6 | 3.65 |
+----+-------+

-- Do the ranking (the problems requires the "dense rank")
SELECT S.score, DENSE_RANK() OVER (
ORDER BY S.score DESC
) AS 'rank'
FROM
Scores S;
-- result is:
+-------+------+
| score | rank |
+-------+------+
| 4 | 1 |
| 4 | 1 |
| 3.85 | 2 |
| 3.65 | 3 |
| 3.65 | 3 |
| 3.5 | 4 |
+-------+------+

Usually, our ranking is not like this: if we have two 4.0 students, the rank of the 3.85 student should be 3, not 2. The RANK() function helps in this case:

SELECT S.score, RANK() OVER (
ORDER BY S.score DESC
) AS 'rank'
FROM
Scores S;
-- result is:
+-------+------+
| score | rank |
+-------+------+
| 4 | 1 |
| 4 | 1 |
| 3.85 | 3 |
| 3.65 | 4 |
| 3.65 | 4 |
| 3.5 | 6 |
+-------+------+

3.2.2 Pandas

In pandas, this operation is quite easy: we have the pd.Series.rank() function. This function has multiple parameters; the most important two are method and ascending. When method is 'dense', it behaves like DENSE_RANK. When method is 'min', it is the common ranking (when several rows share the same score, they all take the minimum rank of the group). ascending controls whether rank 1 is assigned to the highest or the lowest value.

# scores is:
id score
0 1 3.5
1 2 3.65
2 3 4.0
3 4 3.85
4 5 4.0
5 6 3.65

# create a new column, assign the dense_rank:
scores['rank'] = scores['score'].rank(method='dense', ascending=False)
# result is:
id score rank
0 1 3.5 4.0
1 2 3.65 3.0
2 3 4.0 1.0
3 4 3.85 2.0
4 5 4.0 1.0
5 6 3.65 3.0

# create a new column, assign the common rank:
scores['rank'] = scores['score'].rank(method='min', ascending=False)
# result is:
id score rank
0 1 3.5 6.0
1 2 3.65 4.0
2 3 4.0 1.0
3 4 3.85 3.0
4 5 4.0 1.0
5 6 3.65 4.0

3.3 row to column conversion

3.3.1 MySQL

The idea is quite simple, as illustrated below:

[Figure: extracting each store column into its own table, then unioning the three tables]

First, we extract each column to form a new table corresponding to each store. Then, we concatenate ("union") the three tables together. Note that when we extract each table, we should remove the "null" rows.

Following this idea, the implementation is relatively easy:

select product_id, 'store1' as store, store1 as price from Products where store1 is not null
union all
select product_id, 'store2' as store, store2 as price from Products where store2 is not null
union all
select product_id, 'store3' as store, store3 as price from Products where store3 is not null

3.3.2 Pandas

In pandas, you can use a similar idea to the one in the previous subsection. But pandas provides a more powerful tool, pd.melt, which converts the wide (column-per-store) format into the long format:

Input: 
Products table:
+------------+--------+--------+--------+
| product_id | store1 | store2 | store3 |
+------------+--------+--------+--------+
| 0 | 95 | 100 | 105 |
| 1 | 70 | null | 80 |
+------------+--------+--------+--------+
Output:
+------------+--------+-------+
| product_id | store | price |
+------------+--------+-------+
| 0 | store1 | 95 |
| 0 | store2 | 100 |
| 0 | store3 | 105 |
| 1 | store1 | 70 |
| 1 | store3 | 80 |
+------------+--------+-------+

id_vars is the identifier column (it will not be changed), value_vars lists the columns to be unpivoted, var_name is the name of the new column holding the former column names, and value_name is the name of the new column holding the values.

df = products.melt(
    id_vars='product_id',
    value_vars=['store1', 'store2', 'store3'],
    var_name='store',
    value_name='price'
)
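Note that melt keeps the rows whose price is null (product 1 / store2 above), while the expected output removes them. A small follow-up sketch:

# drop rows whose price is null so the result matches the expected output
df = df.dropna(subset=['price'])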

3.4 Conditional operator

The question requires us to set each value according to a predicate. Such a basic requirement is well supported by both MySQL and Pandas.

3.4.1 MySQL

In MySQL, IF(cond, true_value, false_value) is a good helper. It is used inside the SELECT clause.

SELECT 
employee_id,
IF(employee_id % 2 = 1 AND name NOT REGEXP '^M', salary, 0) AS bonus
FROM
employees

The bonus entry in the second column takes the value of salary if the condition is satisfied, and 0 otherwise. The second part of the condition uses regular-expression syntax and is true only when name does not start with 'M'. You may also write name NOT LIKE 'M%' for that part.

3.4.2 Pandas

In pandas, we can use the pd.DataFrame.apply function to achieve this. apply accepts a function f(x), where x is the "current" row (with axis=1). Based on that row, it returns a value. For example, in this question the condition is "the employee_id is an odd number and the employee's name does not start with the character 'M'".

# judge function
def f(x):
    if x['employee_id'] % 2 == 1 and not x['name'].startswith('M'):
        return x['salary']
    else:
        return 0

# Apply it to the dataframe for the new column:
employees['bonus'] = employees.apply(f, axis=1)
# Sample input:
| employee_id | name | salary |
| ----------- | ------- | ------ |
| 2 | Meir | 3000 |
| 3 | Michael | 3800 |
| 7 | Addilyn | 7400 |
| 8 | Juan | 6100 |
| 9 | Kannon | 7700 |

# Sample result of employees['bonus']:
0 0
1 0
2 7400
3 0
4 7700
Name: bonus, dtype: int64

This satisfies the requirement.
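As a side note (my own addition, not part of the original solution), the same result can usually be computed without apply, by building a boolean mask and using Series.where. A minimal sketch:

# vectorized alternative: keep the salary where the condition holds, otherwise 0
mask = (employees['employee_id'] % 2 == 1) & (~employees['name'].str.startswith('M'))
employees['bonus'] = employees['salary'].where(mask, 0)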

2. String Operations

2.1 MySQL

Filtering strings in SQL is generally the same as filtering numeric values. There are many built-in functions to help you with string operations, for example UPPER(), LOWER(), CONCAT(), and SUBSTRING().

  • The LENGTH() function returns the length of each entry in a column:
-- table is
+----+---------+
| id | content |
+----+---------+
| 1 | apple |
| 2 | banana |
| 3 | orange |
+----+---------+

-- Get len of each entry in "content" column
select length(content) from fruits;
+-----------------+
| length(content) |
+-----------------+
| 5 |
| 6 |
| 6 |
+-----------------+

-- Assign the length column to table, named "content_len":
select *, length(content) as content_len from fruits;
+----+---------+-------------+
| id | content | content_len |
+----+---------+-------------+
| 1 | apple | 5 |
| 2 | banana | 6 |
| 3 | orange | 6 |
+----+---------+-------------+
  • LIKE "pattern" tells whether a column matches this pattern, so that you can filter other rows out. In the pattern, _ matches exactly ONE character, and % matches any sequence of characters (including zero characters).
-- table is
+----+---------+
| id | content |
+----+---------+
| 1 | apple |
| 2 | banana |
| 3 | orange |
+----+---------+

-- Get a boolean column where "1" entry means content starts with "a"
select content like "a%" from fruits;
+-------------------+
| content like "a%" |
+-------------------+
| 1 |
| 0 |
| 0 |
+-------------------+

-- Combine with "where" to do filtering
select * from fruits where content like "a%";
+----+---------+
| id | content |
+----+---------+
| 1 | apple |
+----+---------+
  • Sometimes, the patterns above cannot handle complex queries. We can use regular expressions here. The syntax is: <col_name> REGEXP 'pattern'. Note that REGEXP returns true if the pattern matches anywhere in the string; to anchor the match at the beginning, use ^ in the pattern.
-- table is
+----+---------+
| id | content |
+----+---------+
| 1 | apple |
| 2 | banana |
| 3 | orange |
+----+---------+

-- Match all entries beginning with "a"
select * from fruits where content REGEXP "^a";
+----+---------+
| id | content |
+----+---------+
| 1 | apple |
+----+---------+

-- Match all entries containing "an"
select * from fruits where content REGEXP "an";
+----+---------+
| id | content |
+----+---------+
| 2 | banana |
| 3 | orange |
+----+---------+

2.2 Pandas

For numeric data types, we usually directly use syntax like df['col'] > 5 to get the indexing series. For string operations, we have similar ways to get the indexing series.

String accessor. For each string column, we can use .str to get its pandas.core.strings.accessor.StringMethods object. This object has various string operation functions, such as .replace(), .find(), .len(), .startswith(), and more.

  • Some functions, such as .len(), return a new integer Series containing the string length of each entry:
# df is:
id content
0 1 apple
1 2 banana
2 3 orange

# Get len of each entry in "content" column
df['content'].str.len()
0 5
1 6
2 6
Name: content, dtype: int64

# Assign the len series to df, named "content_len":
df['content_len'] = df['content'].str.len()

# df now is:
id content content_len
0 1 apple 5
1 2 banana 6
2 3 orange 6
  • Some functions, like .startswith(), return a new boolean Series, so you can do indexing:
# df is:
id content
0 1 apple
1 2 banana
2 3 orange

# Get indexing series where "content" entry starts with "a"
cond = df['content'].str.startswith('a')

# cond is:
0 True
1 False
2 False
Name: content, dtype: bool

# Do filtering
df = df.loc[cond, :]

# df is:
id content
0 1 apple
  • Regular Expression

Sometimes, we want to use a regex to match entries. We have the .match() function. Note that match anchors at the beginning of the string. If you just want to match any substring, use .contains instead.

# df is:
id content
0 1 apple
1 2 banana
2 3 orange

# Match all entries beginning with "a"
cond = df['content'].str.match(r'a[a-zA-Z]*')

# Result will be:
df.loc[cond, :]
id content
0 1 apple

# Match all entries, containing "an"
cond = df['content'].str.contains(r'an')

# Result will be:
df.loc[cond, :]
id content
1 2 banana
2 3 orange

Introduction

There are 30 questions for practicing Python Pandas and MySQL at https://leetcode.cn/studyplan/30-days-of-pandas/ . These questions basically practice fundamental skills. Pandas and MySQL are similar in some aspects, and one can write the same functionality in both languages. I'll practice my skills with these questions and write notes about the equivalent operations in the two languages.

Accordingly, there are six parts in this series (so it will use 6 articles in total):

  1. Condition Filter & Misc
  2. String Manipulation
  3. Data Manipulation
  4. Data Statistics
  5. Grouping
  6. Merging

1. Condition Filter & Join & Misc

The four related questions in this area are:

Basic where clause filtering, with OR operator: https://leetcode.cn/problems/big-countries/

Basic where clause filtering, with AND operator: https://leetcode.cn/problems/recyclable-and-low-fat-products/

Joining two tables, with where clause filtering null: https://leetcode.cn/problems/customers-who-never-order/

Basic where clause filtering, with DISTINCT and sort: https://leetcode.cn/problems/article-views-i

1.1 Condition

A typical condition-filtering task asks us to do something like this:

Filter the big countries in the World table. A country is big if:

  • it has an area of at least three million (i.e., 3000000 km2), or
  • it has a population of at least twenty-five million (i.e., 25000000).

We’ll use this example to illustrate the point.

1.1.a MySQL

A typical condition filter in SQL looks like this:

select <col_names>
from <table_name>
where (
<condition predicate>
)

SQL allows the use of the logical connectives and, or, and not.

The operands of the logical connectives can be expressions involving the comparison operators <, <=, >, >=, =, and <> (note that the equality operator is a single =, not ==!).

For the example problem above, the answer is

SELECT name, population, area
FROM World
WHERE area >= 3000000 OR population >= 25000000

1.1.b Pandas

Filter operations in pandas are a little bit different. We should know two basic things first:

  • Conditions result in an index Series

If we want to get all rows whose col1 value is greater than 5 in df, the code and result are as below:

print(df)
# OUTPUT:
'''
name col1 col2
0 a 1 10
1 b 2 9
2 c 3 8
3 d 4 7
4 e 5 6
5 f 6 5
6 g 7 4
7 h 8 3
8 i 9 2
9 j 10 1
'''

print(df['col1'] > 5)
# OUTPUT:
'''
0 False
1 False
2 False
3 False
4 False
5 True
6 True
7 True
8 True
9 True
Name: col1, dtype: bool
'''

As you can see, the result of df['col1'] > 5 is a bool Series of length len(df). An entry is True only when the corresponding entry in col1 satisfies the condition.

  • Indexing by the Index Series

We can pass this bool index Series into df.loc[]. Then, only the rows satisfying the condition are kept:

a = df.loc[df['col1'] > 5]
print(a)
# OUTPUT
'''
name col1 col2
5 f 6 5
6 g 7 4
7 h 8 3
8 i 9 2
9 j 10 1
'''

With this method, we can do filtering in pandas. Note that:

For the and, or and not logical operators, the corresponding operators in pandas are &, | and ~. For example:

cond1 = df['col1'] > 5
cond2 = df['col2'] > 2

b = df.loc[cond1 & cond2]

c = df.loc[ (df['col1'] > 5) & (df['col2'] > 2) ]

Note that the parentheses are necessary because of operator precedence. For the example problem above, the answer is:

cond1 = world['area'] >= 300_0000
cond2 = world['population'] >= 2500_0000

return world.loc[cond1 | cond2, ['name', 'population', 'area']]

1.2 Misc

1.2.1 Rename output columns

1.2.1.a MySQL

In MySQL, renaming a column is very easy. By default, if the command is

SELECT name, population, area
FROM World
WHERE area >= 3000000 OR population >= 25000000

Then the output columns are named name, population, and area. To rename them, just use the AS keyword to give each column a new output name:

-- change name => Name, population => Population, area => Area
SELECT name as Name, population as Population, area as Area
FROM World
WHERE area >= 3000000 OR population >= 25000000

1.2.1.b Pandas

In pandas, we can use the pd.DataFrame.rename function to rename the columns. The columns parameter is a dict[str, str], where each key is an old name and the value is the new name. In the example below, we capitalize all column names:

df = df.rename(columns={'name': 'Name', 'population': 'Population', 'area': 'Area'})

1.2.2 Swap output columns order

1.2.2.a MySQL

In SQL, this work is relatively easy. You only need to swap the column names in the select command:

-- show with A, B, C columns 
select A, B, C from table;

-- show with C, B, A columns
select C, B, A from table;

1.2.2.b Pandas

The method is quite similar to the MySQL code:

# show with A, B, C columns 
df.loc[:, ['A', 'B', 'C']]

# show with C, B, A columns
df.loc[:, ['C', 'B', 'A']]

1.2.3 Remove duplicate rows

1.2.3.a MySQL

It is quite easy to do that: you only need to add the DISTINCT keyword. For example:

select DISTINCT student_id
from courses

1.2.3.b Pandas

The DataFrame has a method called drop_duplicates(). For example:

selected_views = selected_views.drop_duplicates()

A more advanced usage is the subset parameter, which specifies which columns to use for identifying duplicates; by default all columns are used. For example, the following code drops rows that have the same values in the columns teacher_id and subject_id.

teacher = teacher.drop_duplicates(subset=['teacher_id', 'subject_id'])

1.2.4 Sort

1.2.4.a MySQL

In the select statement, append an order by <col> asc/desc clause. <col> indicates which column to sort by; asc means ascending order, desc means descending order.

select v.author_id 
from Views as v
order by id asc

To sort by multiple columns (when the first column ties, sort by the second column), do it like this:

select v.author_id 
from Views as v
order by id asc, col2 desc

1.2.4.b Pandas

Use pd.DataFrame.sort_values() to sort by one or more columns. If we want to sort from biggest to smallest, use the ascending parameter.

# sort by one column, id
selected_views = selected_views.sort_values(by=['id'])
selected_views = selected_views.sort_values(by=['id'], ascending=True)

# sort by two columns
# if first column is the same, sort by second column
selected_views = selected_views.sort_values(by=['id', 'view_date'])
selected_views = selected_views.sort_values(by=['id', 'view_date'], ascending=[True, False])

Finally, we have covered all the content we wanted to talk about. In this section, we'll give a quick summary of what we have discussed and the plan for the future of this series.

Summary

In the ten sections of this tutorial, we learned from the low level (tensors) to the high level (modules). In detail, the structure looks like this:

  • Tensor operations (Sec 1, 2)
  • Tensor-wise operations (Sec 3)
  • Module basics (Sec 4)
  • Implement by pure-python (Sec 5 ResNet)
  • Implement by CUDA (Sec 6, 7, 8, 9)

Conclusion

From our tutorial, we know that the model consists of nn.Modules. We implement the forward() function with many tensor-wise operations to do the forward pass.

PyTorch is highly optimized, and the Python side is enough for most cases, so it is usually unnecessary to implement an algorithm in C++/CUDA (see section 9: our CUDA matrix multiplication is slower than PyTorch's). In addition, when we write in native Python, we don't need to worry about the correctness of the gradient calculation.

But in some rare cases the forward() implementation is complicated and may contain for loops, so the performance is low. Under such circumstances, you may consider writing the operator yourself. But keep in mind that:

  • You need to check if the forward & backward propagations are correct;
  • You need to do benchmarks - does my operator really get faster?

Therefore, manually writing an optimized CUDA operator is time consuming and complicated, and it requires proficient CUDA knowledge. But once you write a good CUDA operator, your program can speed up many times. It is all about trade-offs.

Announce in Advance

Finally, let’s talk about some things I will do in the future:

  • This series will not end here. From article 11 onward, we'll talk about some famous model implementations.
  • As I said above, writing CUDA operators needs proficient CUDA knowledge, so I'll set up a new series on how to write good CUDA programs: CUDA Medium Tutorials

In sections 6 to 9, we investigate how to use torch.autograd.Function to implement hand-written operators. The outline is:

  • In section 6, we talked about the basics of torch.autograd.Function. The operators defined by torch.autograd.Function can be automatically back-propagated.
  • In section 7, we talked about the mathematical derivation of the "linear layer" operator.
  • In section 8, we talked about writing the C++/CUDA extension for the "linear layer" operator.
  • In this section (9), we talk about the details of building the extension into a Python module, as well as testing the module. Then we conclude what we've done so far.

Note:

  • This blog is written with the following references:
    • PyTorch official tutorial about CUDA extensions: website.
    • YouTube video about writing CUDA extensions: video, code.
  • For how to write CUDA code, you can follow the official documentation and blogs (in Chinese); English tutorials and video tutorials can be found by searching.
  • This blog only talks about the important points of the matrix multiplication example. Code is shown in pieces for illustration; the whole code is at: code.

Python-side Wrapper

Purely using the C++ extension functions is not enough in our case. As mentioned in Section 6, we need to build our operators with torch.autograd.Function. It is not convenient to let users define the operator wrappers every time, so it's better to write the wrappers in a Python module. Then users can simply import our Python module and use the wrapper classes and functions in it.

[Figure: package structure with the Python-side wrapper (cudaops-struct-improved)]

The Python module is at mylinearops/. Following section 6, we define some autograd.Function operators and nn.Module modules in mylinearops/mylinearops.py. Then, we export the operators and modules with the code in mylinearops/__init__.py:

from .mylinearops import matmul
from .mylinearops import linearop
from .mylinearops import LinearLayer

As a result, when a user imports mylinearops, only the matmul (Y = XW) function, the linearop (Y = XW + b) function and the LinearLayer module are exposed to them.

Writing setup.py and Building

setup.py script

The setup.py script is generally the same for all packages. Next time, you can just copy-paste the code below and modify a few key components.

import glob
import os.path as osp
from setuptools import setup
from torch.utils.cpp_extension import CUDAExtension, BuildExtension


ROOT_DIR = osp.dirname(osp.abspath(__file__))
include_dirs = [osp.join(ROOT_DIR, "include")]

SRC_DIR = osp.join(ROOT_DIR, "src")
sources = glob.glob(osp.join(SRC_DIR, '*.cpp')) + glob.glob(osp.join(SRC_DIR, '*.cu'))


setup(
    name='mylinearops',
    version='1.0',
    author=...,
    author_email=...,
    description='Hand-written Linear ops for PyTorch',
    long_description='Simple demo for writing Linear ops in CUDA extensions with PyTorch',
    ext_modules=[
        CUDAExtension(
            name='mylinearops_cuda',
            sources=sources,
            include_dirs=include_dirs,
            extra_compile_args={'cxx': ['-O2'],
                                'nvcc': ['-O2']}
        )
    ],
    py_modules=['mylinearops.mylinearops'],
    cmdclass={
        'build_ext': BuildExtension
    }
)

At the beginning, we gather the path information: include_dirs (where we store the .h headers) and sources (where we store the C++/CUDA source code).

Then, we call the setup function. The parameters are explained as follows:

  • name: The package name, i.e., the name users use to refer to this package
  • version: The version number, decided by the creator
  • author: The creator’s name
  • author_email: The creator’s email
  • description: The package’s description, short version
  • long_description: The package’s description, long version
  • ext_modules: Key in our building process. When we are building the PyTorch CUDA extension, we should use CUDAExtension, so that the build helper can know how to compile correctly
    • name: the CUDA extension name. We import this name in our wrapper to access the cuda functions
    • sources: the source files
    • include_dirs: the header files
    • extra_compile_args: The extra compile flags. {'cxx': ['-O2'], 'nvcc': ['-O2']} is commonly used, which means using the -O2 optimization level when compiling
  • py_modules: The Python modules needed for the package, which is our wrapper, mylinearops. In most cases, the wrapper module has the same name as the overall package name. ('mylinearops.mylinearops' stands for 'mylinearops/mylinearops.py')
  • cmdclass: When building the PyTorch CUDA extension, we always pass in this: {'build_ext': BuildExtension}

Building

Then, we can build the package. We first activate the conda environment we want to install into:

conda activate <target_env>

Then run:

cd <proj_root>
python setup.py install

Note: Don’t run pip install ., otherwise your python module will not be successfully installed, at least in my case.

It may take some time to compile. If the build process ends with error messages, go and fix them. If it finally displays something like "successfully installed mylinearops", you are ready to go.

To check if the installation is successful, we can try to import it:

$ python
Python 3.9.15 (main, Nov 24 2022, 14:31:59)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import mylinearops
>>> dir(mylinearops)
['LinearLayer', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'linearop', 'matmul', 'mylinearops']
>>>

Further testing will be mentioned in the next subsection.

Module Testing

We will test the forward and backward passes of matmul and LinearLayer respectively. To verify the answers, we'll compare them with PyTorch's implementation or use torch.autograd.gradcheck. To increase the accuracy, we recommend using the double (torch.float64) type instead of float (torch.float32).

For tensors: create with argument dtype=torch.float64.

For modules: a good way is to use model.double() to convert all the parameters and buffers to double.

forward

A typical method is to use torch.allclose to verify if two tensors are close to each other. We can create the reference answer by PyTorch’s implementation.

  • matmul:
import torch
import mylinearops

A = torch.randn(20, 30, dtype=torch.float64).cuda().requires_grad_()
B = torch.randn(30, 40, dtype=torch.float64).cuda().requires_grad_()

res_my = mylinearops.matmul(A, B)
res_torch = torch.matmul(A, B)

print(torch.allclose(res_my, res_torch))
  • LinearLayer:
import torch
import mylinearops

A = torch.randn(40, 30, dtype=torch.float64).cuda().requires_grad_() * 100
linear = mylinearops.LinearLayer(30, 50).cuda().double()

res_my = linear(A)
res_torch = torch.matmul(A, linear.weight) + linear.bias

print(torch.allclose(res_my, res_torch))
print(torch.max(torch.abs(res_my - res_torch)))

It is worth noting that sometimes, because of floating-point error, the answer from PyTorch is not exactly consistent with the answer from our implementation. We have two options:

  1. Pass atol=1e-5, rtol=1e-5 into torch.allclose to increase the tolerance level (see the example below).
  2. [Not very recommended] We can inspect the absolute error with torch.max(torch.abs(res_my - res_torch)) for reference. If the result is merely 0.01 ~ 0.1, that would be OK in most cases.
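For example, reusing res_my and res_torch from the forward test above, the first option looks like this:

# compare with a looser tolerance than the defaults
print(torch.allclose(res_my, res_torch, atol=1e-5, rtol=1e-5))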

backward

For the backward calculation, we can use torch.autograd.gradcheck to verify the result. If some tensors are only float, a warning will occur:

……/torch/autograd/gradcheck.py:647: UserWarning: Input #0 requires gradient and is not a double precision floating point or complex. This check will likely fail if all the inputs are not of double precision floating point or complex.

So it is recommended to use the double type. Otherwise the check will likely fail.

  • matmul:

As mentioned above, for pure calculation functions, we can create all tensors with the double (torch.float64) type. Then we are ready to go:

import torch
import mylinearops

A = torch.randn(20, 30, dtype=torch.float64).cuda().requires_grad_()
B = torch.randn(30, 40, dtype=torch.float64).cuda().requires_grad_()

print(torch.autograd.gradcheck(mylinearops.matmul, (A, B))) # pass
  • LinearLayer:

As mentioned above, we can use model.double(). We are ready to go:

import torch
import mylinearops

## CHECK for Linear Layer with bias ##
A = torch.randn(40, 30, dtype=torch.float64).cuda().requires_grad_()
linear = mylinearops.LinearLayer(30, 40).cuda().double()
print(torch.autograd.gradcheck(linear, (A,))) # pass

## CHECK for Linear Layer without bias ##
A = torch.randn(40, 30, dtype=torch.float64).cuda().requires_grad_()
linear_nobias = mylinearops.LinearLayer(30, 40, bias=False).cuda().double()
print(torch.autograd.gradcheck(linear_nobias, (A,))) # pass

Full Example

Now, we use our linear module to build a classic three-layer linear model [784, 256, 256, 10] to classify the MNIST digits. See the examples/main.py file.

Just as the nn.Linear, we create the model by:

class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.linear1 = mylinearops.LinearLayer(784, 256, bias=True)#.cuda()
        self.linear2 = mylinearops.LinearLayer(256, 256, bias=True)#.cuda()
        self.linear3 = mylinearops.LinearLayer(256, 10, bias=True)#.cuda()
        self.relu = nn.ReLU()
        # self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        x = x.view(-1, 784)
        x = self.relu(self.linear1(x))
        x = self.relu(self.linear2(x))
        # x = self.softmax(self.linear3(x))
        x = self.linear3(x)
        return x

After writing some basic things, we can run our model: python examples/tests.py.

We also build the same model with PyTorch's nn.Linear. The resulting log is:

# mylinearops
...
Epoch: [10/10], Step: [100/468], Loss: 0.0417, Acc: 0.9844
Epoch: [10/10], Step: [200/468], Loss: 0.0971, Acc: 0.9609
Epoch: [10/10], Step: [300/468], Loss: 0.0759, Acc: 0.9766
Epoch: [10/10], Step: [400/468], Loss: 0.0777, Acc: 0.9766
Time: 23.4661s

# torch
...
Epoch: [10/10], Step: [100/468], Loss: 0.1048, Acc: 0.9688
Epoch: [10/10], Step: [200/468], Loss: 0.0412, Acc: 0.9844
Epoch: [10/10], Step: [300/468], Loss: 0.0566, Acc: 0.9688
Epoch: [10/10], Step: [400/468], Loss: 0.0217, Acc: 0.9922
Time: 26.5896s

It is surprising that our implementation is even faster than torch's. (But relax: after several repetitions, we find ours is just about as fast as torch's.) This is because the data scale is relatively small, so the computation takes up a small proportion of the time. When the data scale is larger, ours may be slower than torch's.

In sections 6 to 9, we investigate how to use torch.autograd.Function to implement hand-written operators. The outline is:

  • In section 6, we talked about the basics of torch.autograd.Function. The operators defined by torch.autograd.Function can be automatically back-propagated.
  • In the last section (7), we talked about the mathematical derivation of the "linear layer" operator.
  • In this section (8), we talk about writing the C++/CUDA extension for the "linear layer" operator.
  • In section 9, we will talk about the details of building the extension into a module, as well as testing. Then we'll conclude what we've done so far.

Note:

  • This blog is written with the following references:
    • PyTorch official tutorial about CUDA extensions: website.
    • YouTube video about writing CUDA extensions: video, code.
  • For how to write CUDA code, you can follow the official documentation and blogs (in Chinese); English tutorials and video tutorials can be found by searching.
  • This blog only talks about the important points of the matrix multiplication example. Code is shown in pieces for illustration; the whole code is at: code.

Overall Structure

The general structure of our PyTorch C++/CUDA extension looks like the following:

[Figure: overall structure of the C++/CUDA extension (cudaops-struct)]

We mainly have three kinds of file: Library interface, Core code on CPU, and Core code on GPU. Let’s explain them in detail:

  • Library interface (.cpp)

    • Contains the function interfaces for Python to call. These functions usually take Tensor inputs and return Tensor values.
    • Contains a standard pybind declaration, since our extension uses pybind to expose the C++ functions to Python. It indicates which functions need to be bound.
  • Core code on CPU (.cpp)

    • Contains the core functions that do the calculation.
    • Contains wrappers for the core functions, which create the result tensors, check the input shapes, etc.
  • Core code on GPU (.cu)

    • Contains the CUDA kernel functions (__global__) that do the parallel calculation.
    • Contains wrappers for the core functions, which create the result tensors, check the input shapes, set the launch configuration, launch the kernels, etc.

Then, after we finish the code, we can use Python build tools to compile it into a shared object library (.so file). After that, we can import it normally on the Python side and call the functions we declared in the library interface through pybind11.

In our example code, we don't provide code for CPU calculation; we only support GPU. So we only have two files (src/linearops.cpp and src/addmul_kernel.cu).

Pybind Interface

This is the src/linearops.cpp file in our repo.

1. Utils function

We usually define some utility macros in our code. They are in the include/utils.h header file.

// PyTorch CUDA Utils
#define CHECK_CUDA(x) TORCH_CHECK(x.is_cuda(), #x " must be a CUDA tensor")
#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous")
#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)

// Kernel Config Utils
#define DIV_CEIL(a, b) (((a) + (b) - 1) / (b))

The third macro calls the first two macros, which make sure the tensor is on a CUDA device and is contiguous.

The last macro performs ceiling division, which is often used when setting the CUDA kernel launch configuration.

2. Interface functions

Thanks to pybind, we can simply define functions in C++ and use them in Python. A C++ function like

torch::Tensor func(torch::Tensor a, torch::Tensor b, int c){
    torch::Tensor res;
    ......
    return res;
}

is roughly equivalent to the Python function below.

def func(a: torch.Tensor, b: torch.Tensor, c: int) -> torch.Tensor:
    res = ...  # torch.Tensor
    ......
    return res

Then, we can define our matrix multiplication interface as below. Note that we need to implement both the forward and backward functions!

  • forward

Check the input, input size, and then call the CUDA function wrapper.

torch::Tensor matmul_forward(
    const torch::Tensor &A,
    const torch::Tensor &B)
{
    CHECK_INPUT(A);
    CHECK_INPUT(B);

    TORCH_CHECK(A.size(1) == B.size(0), "matmul_fast_forward: shape mismatch");

    return matmul_cuda(A, B);
}
  • backward

Also check the inputs and their sizes, and then call the CUDA function wrappers. Note that we compute the backward of A * B = C with respect to A and B in two different functions, so that when the gradient of one input is not needed, we can simply skip it.

The derivation of the gradient functions is given in the last section, here.

/* Backward for A gradient */
torch::Tensor matmul_dA_backward(
    const torch::Tensor &grad_output,
    const torch::Tensor &A,
    const torch::Tensor &B)
{
    CHECK_INPUT(grad_output);
    CHECK_INPUT(B);

    // dL/dA = dL/dY * B^T
    auto grad_A = matmul_cuda(grad_output, transpose_cuda(B));

    return grad_A;
}

/* Backward for B gradient */
torch::Tensor matmul_dB_backward(
    const torch::Tensor &grad_output,
    const torch::Tensor &A,
    const torch::Tensor &B)
{
    CHECK_INPUT(grad_output);
    CHECK_INPUT(A);

    // dL/dB = A^T * dL/dY
    auto grad_B = matmul_cuda(transpose_cuda(A), grad_output);

    return grad_B;
}

3. Binding

At the end of src/linearops.cpp, we use the following code to bind the functions. The first string is the function name on the Python side, the second is a pointer to the function being bound, and the last is the docstring for that function on the Python side.

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
    ......
    m.def("matmul_forward", &matmul_forward, "Matmul forward");
    m.def("matmul_dA_backward", &matmul_dA_backward, "Matmul dA backward");
    m.def("matmul_dB_backward", &matmul_dB_backward, "Matmul dB backward");
    ......
}

CUDA wrapper

This is the src/addmul_kernel.cu file in our repo.

The wrapper for matrix multiplication looks like below:

torch::Tensor matmul_cuda(torch::Tensor A, torch::Tensor B) {
    // 1. Get metadata
    const int m = A.size(0);
    const int n = A.size(1);
    const int p = B.size(1);

    // 2. Create output tensor
    auto result = torch::empty({m, p}, A.options());

    // 3. Set launch configuration
    const dim3 blockSize = dim3(BLOCK_SIZE, BLOCK_SIZE);
    const dim3 gridSize = dim3(DIV_CEIL(m, BLOCK_SIZE), DIV_CEIL(p, BLOCK_SIZE));

    // 4. Call the cuda kernel launcher
    AT_DISPATCH_FLOATING_TYPES(A.type(), "matmul_cuda",
        ([&] {
            matmul_fw_kernel<scalar_t><<<gridSize, blockSize>>>(
                A.packed_accessor<scalar_t, 2, torch::RestrictPtrTraits, size_t>(),
                B.packed_accessor<scalar_t, 2, torch::RestrictPtrTraits, size_t>(),
                result.packed_accessor<scalar_t, 2, torch::RestrictPtrTraits, size_t>(),
                m, p
            );
        }));

    // 5. Return the value
    return result;
}

Now let's go through it in detail:

1. Get metadata

Just as with tensors in PyTorch, we can use Tensor.size(0) to access the size of dimension 0.

Note that we already checked that the dimensions match on the interface side, so we don't need to check it here.

2. Create output tensor

We can do the operation in place or create a new tensor for the output. The following code creates a tensor of shape m x p with the same dtype/device as A:

auto result = torch::empty({m, p}, A.options());

In other situations, when we want a specific dtype/device, we can use a declaration like this:

torch::empty({m, p}, torch::dtype(torch::kInt32).device(feats.device()))

torch::empty only allocates the memory and does not initialize the entries to 0. Since we will fill the result tensor in the kernel function anyway, it is not necessary to initialize it to 0.

3. Set launch configuration

You should know some basic CUDA concepts before reading this part. Basically, we are setting the launch configuration based on the input matrix size, using the macros defined before.

const dim3 blockSize = dim3(BLOCK_SIZE, BLOCK_SIZE);
const dim3 gridSize = dim3(DIV_CEIL(m, BLOCK_SIZE), DIV_CEIL(p, BLOCK_SIZE));

We set each thread block size to 16 x 16. Then, we set the number of blocks according to the input size.

4. Call the cuda kernel launcher

Unlike normal CUDA programs, we use ATen's dispatch macro to start the kernel. This is a standard pattern, and you can copy-paste it anywhere.

AT_DISPATCH_FLOATING_TYPES(A.type(), "matmul_cuda",
    ([&] {
        matmul_fw_kernel<scalar_t><<<gridSize, blockSize>>>(
            A.packed_accessor<scalar_t, 2, torch::RestrictPtrTraits, size_t>(),
            B.packed_accessor<scalar_t, 2, torch::RestrictPtrTraits, size_t>(),
            result.packed_accessor<scalar_t, 2, torch::RestrictPtrTraits, size_t>(),
            m, p
        );
    }));
  • This macro is named AT_DISPATCH_FLOATING_TYPES, meaning the inner kernel will support floating types, i.e., float (32-bit) and double (64-bit). For float16, you can use AT_DISPATCH_ALL_TYPES_AND_HALF. For integer types (int (32-bit), long long (64-bit), and more), use AT_DISPATCH_INTEGRAL_TYPES.

  • The first argument, A.type(), indicates the actual type chosen at runtime.

  • The second argument matmul_cuda can be used for error reporting.

  • The last argument, which is a lambda function, is the actual function to be called. Basically in this function, we start the kernel by the following statement:

    matmul_fw_kernel<scalar_t><<<gridSize, blockSize>>>(
    A.packed_accessor<scalar_t, 2, torch::RestrictPtrTraits, size_t>(),
    B.packed_accessor<scalar_t, 2, torch::RestrictPtrTraits, size_t>(),
    result.packed_accessor<scalar_t, 2, torch::RestrictPtrTraits, size_t>(),
    m, p
    );
    • matmul_fw_kernel is the kernel function name.
    • <scalar_t> is the template parameter; it will be replaced with each possible type by the surrounding AT_DISPATCH_FLOATING_TYPES.
    • <<<gridSize, blockSize>>> passes in the launch configuration.
    • In the parameter list, if an argument is a Tensor, we should pass in a packed accessor, which makes indexing inside the kernel convenient.
      • <scalar_t> is the template parameter.
      • 2 means Tensor.ndimension = 2.
      • torch::RestrictPtrTraits means the pointers (tensor memory) do not overlap. It enables some optimizations. Usually not changed.
      • size_t indicates the index type. Usually not changed.
    • If a parameter is a plain integer such as m or p, just pass it in as normal.

5. Return the value

If we have more than one return value, we can set the return type to std::vector<torch::Tensor> and return with {xxx, yyy}.

CUDA kernel

This is the src/addmul_kernel.cu file in our repo.

template <typename scalar_t>
__global__ void matmul_fw_kernel(
    const torch::PackedTensorAccessor<scalar_t, 2, torch::RestrictPtrTraits, size_t> A,
    const torch::PackedTensorAccessor<scalar_t, 2, torch::RestrictPtrTraits, size_t> B,
    torch::PackedTensorAccessor<scalar_t, 2, torch::RestrictPtrTraits, size_t> result,
    const int m, const int p
)
{
    const int row = blockIdx.x * blockDim.x + threadIdx.x;
    const int col = blockIdx.y * blockDim.y + threadIdx.y;

    if (row >= m || col >= p) return;

    scalar_t sum = 0;
    for (int i = 0; i < A.size(1); i++) {
        sum += A[row][i] * B[i][col];
    }
    result[row][col] = sum;
}
  • We define it as a template function (template <typename scalar_t>), so that our kernel can support different input tensor types.
  • Usually we declare the input PackedTensorAccessors as const, to avoid unexpected modifications to them.
  • The main body is just a simple CUDA matrix multiplication. This is very common; you can search online for explanations.

Ending

That's a lot of material for one section. In the next section, we'll talk about how to write setup.py to compile the code and turn it into a Python module.

In sections 6 to 9, we investigate how to use torch.autograd.Function to implement hand-written operators. The outline is:

  • In the last section (6), we talked about the basics of torch.autograd.Function. The operators defined by torch.autograd.Function can be automatically back-propagated.
  • In this section (7), we'll talk about the mathematical derivation of the "linear layer" operator.
  • In section 8, we talk about writing the C++/CUDA extension for the "linear layer" operator.
  • In section 9, we talk about the details of building the extension into a module, as well as testing. Then we'll conclude what we've done so far.

The linear layer is defined by Y = XW + b. There is a matrix multiplication operation, and a bias addition. We’ll talk about their forward/backward derivation separately.

(I feel sorry that currently there is some problem with displaying mathematics formula here. I’ll use screenshot first.)

Matrix multiplication: forward

The matrix multiplication operation is a common operator. Each entry in the result matrix is a vector dot product from the two input matrices: the (i, j) entry of the result comes from multiplying the first matrix's row i with the second matrix's column j. From this property, we know that the number of columns of the first matrix should equal the number of rows of the second matrix. The shapes are: [m, n] x [n, r] -> [m, r]. For more details, see the figure below.

[Figure: matrix multiplication forward pass, shapes [m, n] x [n, r] -> [m, r]]

Matrix multiplication: backward

First, we should know the goal of the backward propagation. From the upstream side, we receive the gradient of the result matrix C. (A gradient matrix has the same size as its corresponding matrix; i.e., if C has shape [m, r], then the gradient of C has shape [m, r] as well.) In this step, we should compute the gradients of matrices A and B. The gradients of A and B are functions of A, B and the gradient of C. Specifically, by the chain rule, we can formulate it as

dL/dA = (dL/dC) * (dC/dA),    dL/dB = (dL/dC) * (dC/dB)

To figure out the gradient of A, we should first investigate how an entry A[i, j] contributes to the entries of the result matrix C. See the figure below:

[Figure: entry A[i, j] multiplies row j of B and contributes to row i of C]

As shown above, entry A[i, j] multiplies with the entries in row j of matrix B, contributing to the entries in row i of matrix C. We can write the gradient down as the formula below:

dL/dA[i, j] = sum_k dL/dC[i, k] * B[j, k] = sum_k dL/dC[i, k] * (B^T)[k, j]

The result above is the gradient for one entry A[i, j], and it is a vector dot product between row i of one matrix (the gradient of C) and column j of another matrix (B^T). Observing this formula, we can naturally extend it to the gradient of the whole matrix A, which becomes a matrix product.

dL/dA = dL/dC * B^T,    dL/dB = A^T * dL/dC

Recall that we said before that "the gradients of A and B are functions of A, B and the gradient of C". Our derivation indeed shows that, doesn't it?

Add bias: forward

First, note that when doing the addition, we're actually adding the XW matrix (shape [n, r]) to the bias vector (shape [r]). Indeed, there is broadcasting here: we add the bias to each row of the XW matrix.

[Figure: bias addition forward pass, broadcasting the bias over each row of XW]

Add bias: backward

With a similar principle, we can get the gradient of the bias as well.

[Figure: bias addition backward pass]

For each entry in vector b, the gradient is:

dL/db[i] = sum_j dL/dC[j, i]

That is, the gradient of entry b_i is the sum of the i-th column of the upstream gradient. In total, the gradient of b is the sum along each column (i.e., axis=0). In code, we write:

grad_b = torch.sum(grad_C, axis=0)

PyTorch Verification

Finally, we can write a PyTorch program to verify whether our derivation is correct: we compare our calculated gradients with the gradients calculated by PyTorch. If they are the same, our derivation is correct.

import torch
A = torch.randn(10, 20).requires_grad_()
B = torch.randn(20, 30).requires_grad_()

res = torch.mm(A, B)
res.retain_grad()
res.sum().backward()

print(torch.allclose(A.grad, torch.mm(res.grad, B.t()))) # grad_A = grad_res * B^T
print(torch.allclose(B.grad, torch.mm(A.t(), res.grad))) # grad_B = A^T * grad_res

Finally, the output is:

True
True

This means that our derivation is correct.
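The bias gradient can be checked in the same way. This is my own addition following the same pattern; it is not part of the original post:

import torch

X = torch.randn(10, 20).requires_grad_()
W = torch.randn(20, 30).requires_grad_()
b = torch.randn(30).requires_grad_()

out = torch.mm(X, W) + b   # broadcasting adds b to each row
out.retain_grad()
out.sum().backward()

print(torch.allclose(b.grad, out.grad.sum(axis=0)))  # grad_b = column-wise sum of grad_out, prints True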

In this section (and also three sections in the future), we investigate how to use torch.autograd.Function to implement hand-written operators. The tentative outline is:

  • In this section (6), we talk about the basics of torch.autograd.Function.
  • In the next section (7), we'll talk about the mathematical derivation of the "linear layer" operator.
  • In section 8, we talk about writing the C++/CUDA extension for the "linear layer" operator.
  • In section 9, we talk about the details of building the extension into a module, as well as testing. Then we'll conclude what we've done so far.

Backgrounds

This article mainly references the official tutorial and summarizes and explains the important points.

By defining an operator with torch.autograd.Function and implementing its forward/backward functions, we can use this operator together with other PyTorch built-in operators. The operators defined by torch.autograd.Function can be automatically back-propagated.

As mentioned in the tutorial, we should use torch.autograd.Function in the following scenarios:

  • The computation comes from another library, so it doesn't support autograd natively; we should explicitly define its backward function.
  • PyTorch's implementation of an operator cannot benefit from parallelization; we use the PyTorch C++/CUDA extension for better performance.

Basic Structure

The following is the basic structure of the Function:

import torch
from torch.autograd import Function

class LinearFunction(Function):

    @staticmethod
    def forward(ctx, input0, input1, ... , inputN):
        # Save the inputs for the backward use.
        ctx.save_for_backward(input0, input1, ... , inputN)
        # Calculate the output0, ... outputM given the inputs.
        ......
        return output0, ... , outputM

    @staticmethod
    def backward(ctx, grad_output0, ... , grad_outputM):
        # Get and unpack the saved tensors.
        input0, input1, ... , inputN = ctx.saved_tensors

        grad_input0 = grad_input1 = grad_inputN = None
        # needs_input_grad records whether each input needs its gradient. This can improve the efficiency.
        if ctx.needs_input_grad[0]:
            grad_input0 = ...  # backward calculation
        if ctx.needs_input_grad[1]:
            grad_input1 = ...  # backward calculation
        ......

        return grad_input0, grad_input1, grad_inputN
  1. The forward and backward functions are staticmethods. The forward function is o0, ..., oM = forward(i0, ..., iN): it calculates output0 ~ outputM from input0 ~ inputN. The backward function is g_i0, ..., g_iN = backward(g_o0, ..., g_oM): it calculates the gradients of input0 ~ inputN from the gradients of output0 ~ outputM.

  2. Since forward and backward are merely functions, we need to store the input tensors into ctx in the forward pass, so that we can get them back in the backward function. See here for an alternative way to define a Function.

  3. ctx.needs_input_grad is a tuple of booleans. It records whether each input needs its gradient to be calculated. Therefore, we can save computation when some tensor doesn't need gradients; in that case, the backward function returns None for that tensor. A concrete example is sketched right below.
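To make the skeleton concrete, here is a linear-function example close to the one in the official PyTorch tutorial (written here as a sketch, reusing the imports from the skeleton above):

class LinearFunction(Function):

    @staticmethod
    def forward(ctx, input, weight, bias=None):
        ctx.save_for_backward(input, weight, bias)
        output = input.mm(weight.t())                 # Y = X W^T
        if bias is not None:
            output += bias.unsqueeze(0).expand_as(output)
        return output

    @staticmethod
    def backward(ctx, grad_output):
        input, weight, bias = ctx.saved_tensors
        grad_input = grad_weight = grad_bias = None
        if ctx.needs_input_grad[0]:
            grad_input = grad_output.mm(weight)       # dL/dX = dL/dY * W
        if ctx.needs_input_grad[1]:
            grad_weight = grad_output.t().mm(input)   # dL/dW = dL/dY^T * X
        if bias is not None and ctx.needs_input_grad[2]:
            grad_bias = grad_output.sum(0)            # dL/db = column-wise sum
        return grad_input, grad_weight, grad_bias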

Use it

Pure functions

After defining the class, we can use the .apply method to use it. Simply

# Option 1: alias
linear = LinearFunction.apply

or,

# Option 2: wrap in a function, to support default args and keyword args.
def linear(input, weight, bias=None):
    return LinearFunction.apply(input, weight, bias)

Then call as

output = linear(input, weight, bias) # input, weight, bias are all tensors!

nn.Module

In most cases, the weight and bias are parameters that are trainable during the process. We can further wrap this linear function into a Linear module:

class Linear(nn.Module):
    def __init__(self, input_features, output_features, bias=True):
        super().__init__()
        self.input_features = input_features
        self.output_features = output_features

        # nn.Parameters require gradients by default.
        self.weight = nn.Parameter(torch.empty(output_features, input_features))
        if bias:
            self.bias = nn.Parameter(torch.empty(output_features))
        else:
            # You should always register all possible parameters, but the
            # optional ones can be None if you want.
            self.register_parameter('bias', None)

        # Not a very smart way to initialize weights
        nn.init.uniform_(self.weight, -0.1, 0.1)
        if self.bias is not None:
            nn.init.uniform_(self.bias, -0.1, 0.1)

    def forward(self, input):
        # See the autograd section for explanation of what happens here.
        return LinearFunction.apply(input, self.weight, self.bias)

    def extra_repr(self):
        # (Optional) Set the extra information about this module. You can test
        # it by printing an object of this class.
        return 'input_features={}, output_features={}, bias={}'.format(
            self.input_features, self.output_features, self.bias is not None
        )

As mentioned in sections 3 and 4 of this series, the weight and bias should be nn.Parameter so that they can be recognized correctly. Then we initialize the weights with random values.

In the forward function, we use the LinearFunction.apply we defined. The backward pass will then be handled automatically, just as with other PyTorch modules.
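For completeness, a quick usage sketch of this module (my own example, assuming the LinearFunction sketch shown earlier and the imports above):

layer = Linear(20, 30).double()                       # double precision also helps gradcheck
x = torch.randn(5, 20, dtype=torch.double, requires_grad=True)

out = layer(x)                                        # shape: [5, 30]
out.sum().backward()                                  # gradients flow through LinearFunction.backward
print(x.grad.shape, layer.weight.grad.shape)          # torch.Size([5, 20]) torch.Size([30, 20])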

Problem

Today when I was running PyTorch scripts, I met a strange problem:

a = torch.rand(2, 2).to('cuda:1')
......
torch.cuda.synchronize()

but it resulted in the following error:

  File "....../test.py", line 67, in <module>
torch.cuda.synchronize()
File "....../miniconda3/envs/py39/lib/python3.9/site-packages/torch/cuda/__init__.py", line 495, in synchronize
return torch._C._cuda_synchronize()
RuntimeError: CUDA error: out of memory

But it's clear that GPU 1 has enough memory (we only need to allocate 16 bytes!):

|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:1A:00.0 Off | N/A |
| 75% 73C P2 303W / 350W | 24222MiB / 24268MiB | 64% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA GeForce ... Off | 00000000:1B:00.0 Off | N/A |
| 90% 80C P2 328W / 350W | 15838MiB / 24268MiB | 92% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+

And normally, when we fail to allocate the memory for tensors, the error is:

CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 6.00 GiB total capacity; 4.54 GiB already allocated; 14.94 MiB free; 4.64 GiB reserved in total by PyTorch)

But our error message is much “simpler”. So what happened?

Possible Answer

This confused me for some time. According to this website:

When you initially do a CUDA call, it’ll create a cuda context and a THC context on the primary GPU (GPU0), and for that i think it needs 200 MB or so. That’s right at the edge of how much memory you have left.

Surprisingly, in my case GPU 0 had already occupied 24222MiB / 24268MiB of memory, so there was no more room for the context. This also explains why our error message is RuntimeError: CUDA error: out of memory, and not the usual message that tensor allocation failed.

Possible Solution

Set the CUDA_VISIBLE_DEVICES environment variable. We need to change the primary GPU (GPU 0) to another one.

Method 1

In the starting python file:

# Do this before `import torch`
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1' # set to what you like, e.g., '1,2,3,4,5,6,7'

Method 2

In the shell:

# Do this before run python
export CUDA_VISIBLE_DEVICES=1 # set to what you like, e.g., '1,2,3,4,5,6,7'

And then, our program is ready to go.

In this section, we'll use the knowledge we learned in the last section (see here) to implement a ResNet network (paper).

Note that we follow the original paper's design. Our implementation is a simpler version of the official torchvision implementation. (That is, we only implement the key structure and the random weight initialization; we don't consider dilation or other options.)

Preliminaries: Calculate the feature map size

  • Basic formula

Given a convolution kernel of size K, padding P, stride S, and input feature map size I, the output size is O = floor((I - K + 2P) / S) + 1 (a small sketch follows after this list).

  • Corollary

Based on the formula above, we know that when S=1:

  1. K=3, P=1 keeps the output size equal to the input size.
  2. K=1, P=0 keeps the output size equal to the input size.
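
As a quick sanity check, we can verify the formula and the corollary with a couple of throwaway convolutions (a minimal sketch using torch.nn.Conv2d):

import torch
import torch.nn as nn

x = torch.randn(1, 3, 56, 56)

# K=3, P=1, S=1 keeps the spatial size: (56 - 3 + 2*1)/1 + 1 = 56
print(nn.Conv2d(3, 8, kernel_size=3, stride=1, padding=1)(x).shape)  # [1, 8, 56, 56]

# K=1, P=0, S=1 also keeps it: (56 - 1 + 0)/1 + 1 = 56
print(nn.Conv2d(3, 8, kernel_size=1, stride=1, padding=0)(x).shape)  # [1, 8, 56, 56]

# With S=2, the size is halved: floor((56 - 3 + 2)/2) + 1 = 28
print(nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1)(x).shape)  # [1, 8, 28, 28]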

Overall Structure

Table 1 in the original paper illustrates the overall structure of ResNet:

(Figure: Table 1 from the ResNet paper, showing the overall architectures.)

We can see that from conv2 onward, each layer consists of several blocks, and the blocks used in the 18- and 34-layer networks differ from those used in the 50-, 101-, and 152-layer networks.

We can deduce the following from the table:

  1. When the feature map enters the next layer, the first block needs to downsample it. This is done by setting one of that block's convolutions to stride=2.
  2. Every other convolution keeps the feature map size unchanged, so its settings follow the corollary in the Preliminaries.

Basic Block Implementation

The basic block’s structure looks like this:

(Figure: structure of the basic residual block.)

Please see the code below. Apart from channels, which sets the number of channels inside the block, there are three additional parameters, in_channels, stride, and downsample, which make this block usable as the FIRST block of each layer.

According to the ResNet structure, the first block of conv3, for example, receives an input of size 64*56*56. This first block has three tasks (see the shape check after the code below):

  1. Reduce the feature map size to 28*28, so its stride must be set to 2.
  2. Increase the number of channels from 64 to 128, so in_channels should be 64.
  3. Since the input is 64*56*56 while the output is 128*28*28, a downsample convolution is needed so that the shortcut matches the output shape.
import torch
import torch.nn as nn

class ResidualBasicBlock(nn.Module):
    expansion: int = 1

    def __init__(self, in_channels: int, channels: int, stride: int = 1, downsample: nn.Module = None):
        super().__init__()
        self.downsample = downsample
        self.conv1 = nn.Conv2d(in_channels, channels, 3, stride, 1)
        self.batchnorm1 = nn.BatchNorm2d(channels)
        self.relu1 = nn.ReLU()
        self.conv2 = nn.Conv2d(channels, channels, 3, 1, 1)
        self.batchnorm2 = nn.BatchNorm2d(channels)
        self.relu2 = nn.ReLU()

    def forward(self, x):
        residual = x
        x = self.conv1(x)
        x = self.batchnorm1(x)
        x = self.relu1(x)
        x = self.conv2(x)
        x = self.batchnorm2(x)
        if self.downsample:
            residual = self.downsample(residual)
        x += residual
        x = self.relu2(x)
        return x
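
To see the first-block behaviour described above, here is a quick shape check (a sketch that assumes the ResidualBasicBlock class exactly as defined here):

# First block of conv3: 64*56*56 in, 128*28*28 out, with a 1x1 stride-2 shortcut projection.
downsample = nn.Sequential(
    nn.Conv2d(64, 128, 1, 2),
    nn.BatchNorm2d(128),
)
block = ResidualBasicBlock(in_channels=64, channels=128, stride=2, downsample=downsample)
x = torch.randn(1, 64, 56, 56)
print(block(x).shape)  # torch.Size([1, 128, 28, 28])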

Bottleneck Block Implementation

The bottleneck block’s structure looks like this:

(Figure: structure of the bottleneck residual block.)

To reduce the computation cost, the bottleneck block uses a 1x1 kernel to map the large number of channels (e.g., 256) down to a smaller one (e.g., 64), performs the 3x3 convolution at the reduced width, and then maps the 64 channels back up to 256.

Please see the code below. As with the basic block, we have three additional parameters, in_channels, stride, and downsample, so that this block can serve as the FIRST block of each layer. The reasons are the same as above.

class ResidualBottleNeck(nn.Module):
    expansion: int = 4

    def __init__(self, in_channels: int, channels: int, stride: int = 1, downsample: nn.Module = None):
        super().__init__()
        self.downsample = downsample
        self.conv1 = nn.Conv2d(in_channels, channels, 1, 1)
        self.batchnorm1 = nn.BatchNorm2d(channels)
        self.relu1 = nn.ReLU()
        self.conv2 = nn.Conv2d(channels, channels, 3, stride, 1)
        self.batchnorm2 = nn.BatchNorm2d(channels)
        self.relu2 = nn.ReLU()
        self.conv3 = nn.Conv2d(channels, channels*4, 1, 1)
        self.batchnorm3 = nn.BatchNorm2d(channels*4)
        self.relu3 = nn.ReLU()

    def forward(self, x):
        residual = x
        x = self.conv1(x)
        x = self.batchnorm1(x)
        x = self.relu1(x)

        x = self.conv2(x)
        x = self.batchnorm2(x)
        x = self.relu2(x)

        x = self.conv3(x)
        x = self.batchnorm3(x)

        if self.downsample:
            residual = self.downsample(residual)

        x += residual
        x = self.relu3(x)
        return x
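
A similar shape check for the bottleneck (again a sketch assuming the class as defined here): the first block of conv2 in ResNet-50 keeps the 56*56 resolution but expands the channels from 64 to 64*4 = 256, so it still needs a shortcut projection.

# First block of conv2 in ResNet-50: 64*56*56 in, 256*56*56 out (expansion = 4).
downsample = nn.Sequential(
    nn.Conv2d(64, 256, 1, 1),
    nn.BatchNorm2d(256),
)
block = ResidualBottleNeck(in_channels=64, channels=64, stride=1, downsample=downsample)
x = torch.randn(1, 64, 56, 56)
print(block(x).shape)  # torch.Size([1, 256, 56, 56])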

ResNet Base Implementation

Then we can put things together to form the ResNet model! The overall structure is straightforward: we define the submodules one by one and implement the forward() function.

There are only two tricky points:

  1. To support both base blocks, the block class is passed to the initializer. Since the two blocks differ slightly in how their channel counts grow, ResidualBasicBlock and ResidualBottleNeck each carry an expansion attribute, which makes it convenient to compute the correct number of output channels.
  2. See the _make_layer function below. It needs to decide whether a downsample module is required; the condition and the explanation are given in its comments.
class ResNetBase(nn.Module):
    def __init__(self, block, layer_blocks: list, input_channels=3):
        super().__init__()
        self.block = block
        # conv1: 7x7, stride 2
        self.conv1 = nn.Sequential(
            nn.Conv2d(input_channels, 64, 7, 2, 3),
            nn.BatchNorm2d(64),
            nn.ReLU()
        )
        # max pool
        self.maxpool = nn.MaxPool2d(3, 2, 1)
        # conv2 ~ conv5_x
        self.in_channels = 64
        self.conv2 = self._make_layer(64, layer_blocks[0])
        self.conv3 = self._make_layer(128, layer_blocks[1], 2)
        self.conv4 = self._make_layer(256, layer_blocks[2], 2)
        self.conv5 = self._make_layer(512, layer_blocks[3], 2)

        self.downsample = nn.AvgPool2d(7)  # global average pooling over the final 7x7 feature map
        output_numel = 512 * self.block.expansion
        self.fc = nn.Linear(output_numel, 1000)

        # init the weights
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode="fan_out", nonlinearity="relu")
            elif isinstance(m, nn.Linear):
                nn.init.normal_(m.weight, 0, 0.01)
                nn.init.constant_(m.bias, 0)
            elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def _make_layer(self, channel, replicates, stride=1):
        modules = []

        downsample = None
        if stride != 1 or self.in_channels != channel*self.block.expansion:
            # Use downsample to match the dimension in two cases:
            # 1. stride != 1, meaning we should downsample H, W in this layer.
            #    Then we need to match the residual's H, W and the output's H, W of this layer.
            # 2. self.in_channels != channel*self.block.expansion, meaning we should increase C in this layer.
            #    Then we need to match the residual's C and the output's C of this layer.
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channels, channel*self.block.expansion, 1, stride),
                nn.BatchNorm2d(channel*self.block.expansion)
            )

        modules.append(self.block(self.in_channels, channel, stride, downsample))

        self.in_channels = channel * self.block.expansion
        for r in range(1, replicates):
            modules.append(self.block(self.in_channels, channel))
        return nn.Sequential(*modules)

    def forward(self, x):
        # x: shape Bx3x224x224
        x = self.conv1(x)
        x = self.maxpool(x)

        x = self.conv2(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.conv5(x)

        x = self.downsample(x)
        x = torch.flatten(x, start_dim=1)
        x = self.fc(x)

        return x

Encapsulate the Constructors

Finally, we wrap the constructors in helper functions, one per ResNet variant:

def my_resnet18(in_channels=3):
    return ResNetBase(ResidualBasicBlock, [2, 2, 2, 2], in_channels)

def my_resnet34(in_channels=3):
    return ResNetBase(ResidualBasicBlock, [3, 4, 6, 3], in_channels)

def my_resnet50(in_channels=3):
    return ResNetBase(ResidualBottleNeck, [3, 4, 6, 3], in_channels)

def my_resnet101(in_channels=3):
    return ResNetBase(ResidualBottleNeck, [3, 4, 23, 3], in_channels)

def my_resnet152(in_channels=3):
    return ResNetBase(ResidualBottleNeck, [3, 8, 36, 3], in_channels)

Then we can use them just like any other model:

img = torch.randn(1, 3, 224, 224)
model_my = my_resnet50()
res_my = model_my(img)
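
As a final sanity check, the output should be a batch of 1000 logits. If torchvision is installed, we can also compare the output shape against the reference resnet50 (a quick sketch; depending on your torchvision version, the constructor argument may be pretrained=False instead of weights=None):

print(res_my.shape)  # torch.Size([1, 1000])

import torchvision
model_tv = torchvision.models.resnet50(weights=None)
print(model_tv(img).shape)  # torch.Size([1, 1000])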