Playing with Flask, send_file and various configurations to configure performant HTTP range requests
Let's resume the download of our big files served from Flask
March 2019.
Introduction
We want our Flask app to be able to serve big files. Our routes will return response objects built with
send_file
. Let's see how this behaves with various stacks.
We're going to serve a 100MB file we just created ourselves:
# dd if=/dev/urandom of=example_directory/big-file.dat bs=1m count=100
# stat example_directory/big-file.dat
82 1314112 -rw-r--r-- 1 root wheel 2702504 104857600 "Mar 2 20:56:11 2019" "Mar 2 20:55:15 2019" "Mar 2 20:55:15 2019" "Mar 2 19:45:39 2019" 32768 204864 0 example_directory/big-file.dat
The Flask app
This is a very simple Flask app, with its only route serving a big file:
from flask import Flask
from flask.helpers import send_file

app = Flask(__name__)

@app.route('/get-big-file')
def get_big_file():
    return send_file('example_directory/big-file.dat', conditional=True)

if __name__ == '__main__':
    app.run()
Running the application naked
Let's run the application directly, using Python:
# python flask_app.py
* Serving Flask app "flask_app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
Let's ask for a range:
# curl -v -r 1024-2047 http://127.0.0.1:5000/get-big-file -o /dev/null
* Trying 127.0.0.1...
* TCP_NODELAY set
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to 127.0.0.1 (127.0.0.1) port 5000 (#0)
> GET /get-big-file HTTP/1.1
> Host: 127.0.0.1:5000
> Range: bytes=1024-2047
> User-Agent: curl/7.62.0
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 206 PARTIAL CONTENT
< Content-Length: 1024
< Content-Type: application/octet-stream
< Last-Modified: Sat, 02 Mar 2019 18:45:39 GMT
< Cache-Control: public, max-age=43200
< Expires: Sun, 03 Mar 2019 06:48:57 GMT
< ETag: "1551552339.4-10485760-1578571130"
< Date: Sat, 02 Mar 2019 18:48:57 GMT
< Accept-Ranges: bytes
< Content-Range: bytes 1024-2047/10485760
< Server: Werkzeug/0.14.1 Python/2.7.15
<
{ [1024 bytes data]
100 1024 100 1024 0 0 500k 0 --:--:-- --:--:-- --:--:-- 500k
* Closing connection 0
All good! We got only the data we wanted.
The file was served entirely by Python and Flask's stack:
127.0.0.1 - - [02/Mar/2019 19:48:57] "GET /get-big-file HTTP/1.1" 206 -
Running the application with uWSGI
Discovering that send_file doesn't work with ranges
Let's run the application with uWSGI, using the HTTP server provided by the stack:
# uwsgi --http=127.0.0.1:5001 --master --wsgi-file=flask_app.py --callable=app --home=venv
*** Starting uWSGI 2.0.18 (64bit) on [Sat Mar 2 20:41:46 2019] ***
compiled with version: 4.2.1 Compatible FreeBSD Clang 6.0.0 (tags/RELEASE_600/final 326565) on 02 March 2019 19:37:03
os: FreeBSD-11.2-RELEASE-p5 FreeBSD 11.2-RELEASE-p5 #0: Tue Nov 27 09:33:52 UTC 2018 root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC
nodename: example
machine: amd64
clock source: unix
pcre jit disabled
detected number of CPU cores: 2
current working directory: /usr/local/www/example
detected binary path: /usr/local/www/example/venv/bin/uwsgi
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
your processes number limit is 6656
your memory page size is 4096 bytes
detected max file descriptor number: 57987
lock engine: POSIX semaphores
thunder lock: disabled (you can enable it with --thunder-lock)
uWSGI http bound on 127.0.0.1:5001 fd 4
uwsgi socket 0 bound to TCP address 127.0.0.1:35025 (port auto-assigned) fd 3
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
Python version: 2.7.15 (default, Dec 6 2018, 01:13:45) [GCC 4.2.1 Compatible FreeBSD Clang 6.0.0 (tags/RELEASE_600/final 326565)]
Set PythonHome to venv
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x804085000
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 145808 bytes (142 KB) for 1 cores
*** Operational MODE: single process ***
WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0x804085000 pid: 95266 (default app)
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 95266)
spawned uWSGI worker 1 (pid: 95272, cores: 1)
spawned uWSGI http 1 (pid: 95273)
Let's ask for a range:
# curl -v -r 1024-2047 http://127.0.0.1:5001/get-big-file -o /dev/null
* Trying 127.0.0.1...
* TCP_NODELAY set
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to 127.0.0.1 (127.0.0.1) port 5001 (#0)
> GET /get-big-file HTTP/1.1
> Host: 127.0.0.1:5001
> Range: bytes=1024-2047
> User-Agent: curl/7.62.0
> Accept: */*
>
< HTTP/1.1 206 PARTIAL CONTENT
< Content-Length: 1024
< Content-Type: application/octet-stream
< Last-Modified: Sat, 02 Mar 2019 18:45:39 GMT
< Cache-Control: public, max-age=43200
< Expires: Sun, 03 Mar 2019 07:42:38 GMT
< ETag: "1551552339.4-10485760-1578571130"
< Date: Sat, 02 Mar 2019 19:42:38 GMT
< Accept-Ranges: bytes
< Content-Range: bytes 1024-2047/10485760
<
{ [1024 bytes data]
100 1024 100 1024 0 0 44521 0 --:--:-- --:--:-- --:--:-- 44521
* Connection #0 to host 127.0.0.1 left intact
All good! We got only the data we wanted.
Let's investigate a bit further.
Let's ask for 1024 bytes in the middle of the file:
# curl -v -r 52428800-52429823 http://127.0.0.1:5001/get-big-file -o /dev/null
* Trying 127.0.0.1...
* TCP_NODELAY set
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to 127.0.0.1 (127.0.0.1) port 5001 (#0)
> GET /get-big-file HTTP/1.1
> Host: 127.0.0.1:5001
> Range: bytes=52428800-52429823
> User-Agent: curl/7.62.0
> Accept: */*
>
< HTTP/1.1 206 PARTIAL CONTENT
< Content-Length: 1024
< Content-Type: application/octet-stream
< Last-Modified: Sat, 02 Mar 2019 20:00:48 GMT
< Cache-Control: public, max-age=43200
< Expires: Sun, 03 Mar 2019 08:05:37 GMT
< ETag: "1551556848.45-104857600-1578571130"
< Date: Sat, 02 Mar 2019 20:05:37 GMT
< Accept-Ranges: bytes
< Content-Range: bytes 52428800-52429823/104857600
<
{ [139 bytes data]
100 1024 100 1024 0 0 2767 0 --:--:-- --:--:-- --:--:-- 2767
Let's trace the worker process and query the range again:
# truss -p 95272
kevent(4,0x0,0,{ 3,EVFILT_READ,0x0,0,0x1,0x0 },1,0x0) = 1 (0x1)
accept(3,{ AF_INET 127.0.0.1:16624 },0x8008aa10c) = 5 (0x5)
read(5,"\0`\^A\0\^N\0REQUEST_METHOD\^C\0"...,4100) = 356 (0x164)
open("/usr/local/www/example/example_directory/big-file.dat",O_RDONLY,0666) = 6 (0x6)
fstat(6,{ mode=-rw-r--r-- ,inode=1314112,size=104857600,blksize=32768 }) = 0 (0x0)
stat("/usr/local/www/example/example_directory/big-file.dat",{ mode=-rw-r--r-- ,inode=1314112,size=104857600,blksize=32768 }) = 0 (0x0)
stat("/usr/local/www/example/example_directory/big-file.dat",{ mode=-rw-r--r-- ,inode=1314112,size=104857600,blksize=32768 }) = 0 (0x0)
stat("/usr/local/www/example/example_directory/big-file.dat",{ mode=-rw-r--r-- ,inode=1314112,size=104857600,blksize=32768 }) = 0 (0x0)
stat("/usr/local/www/example/example_directory/big-file.dat",{ mode=-rw-r--r-- ,inode=1314112,size=104857600,blksize=32768 }) = 0 (0x0)
fstat(6,{ mode=-rw-r--r-- ,inode=1314112,size=104857600,blksize=32768 }) = 0 (0x0)
read(6,"\M-B!\M^CCg\M^[\M-[\M-*\M-A\M-=0"...,32768) = 32768 (0x8000)
read(6,"\^^E\M-]\M-W\M-H\M-TM\M-+\f\M-\"...,32768) = 32768 (0x8000)
read(6,"#\^Z\M-^\M-%YmwB8\M^J\M-%D70\M-x"...,32768) = 32768 (0x8000)
read(6,"\^N\^Y\^C\M^JQ\M-&\M-9\M^@\M-g"...,32768) = 32768 (0x8000)
read(6,"%2\M-w\M-h\M^P\M-z\M-f(\M^I\^Oa3"...,32768) = 32768 (0x8000)
read(6,"\M-t\^C44\M-K:\M-/t\M-Dy\M-C\M^J"...,32768) = 32768 (0x8000)
read(6,"\^Z\M-+Y+\M-{&A^\M-.\M^Z\M-"%%"...,32768) = 32768 (0x8000)
read(6,"\M^F\M-V\M-A\M-_\M^\"5\M^IN\M-C"...,32768) = 32768 (0x8000)
read(6,"F\M-f\M^B\M-~\M-:\M-,\M-t\M^T!"...,32768) = 32768 (0x8000)
read(6,"Q\M-j\rHcJ\M^O#E\M-N|I"](j`\M-d"...,32768) = 32768 (0x8000)
read(6,"\^Y8\M-O\M-Z\M-+^\M-Rd\M^R\M-*o"...,32768) = 32768 (0x8000)
...
[1590 more reads redacted]
...
writev(5,[{"HTTP/1.1 206 PARTIAL CONTENT\r\n"...,371},{"*dim\M^Jl[\M-V\M-6eg\M^Y\M^P\M^Q"...,139}],2) = 510 (0x1fe)
write(5,"\^SD1\M-^\M-[&\^O\M-J\M-5\M-g"...,83) = 83 (0x53)
write(5,"\M-P7\M-BT\M^Ww\^?i\^Aq@\M-I\M-("...,641) = 641 (0x281)
write(5,"\M-lii%\M-;B\^?f\n",9) = 9 (0x9)
write(5,"\M-R\M-F\M-BI !\M-N\M-zj\M-A\b"...,152) = 152 (0x98)
close(6) = 0 (0x0)
close(5) = 0 (0x0)
writev(2,[{"[pid: 95414|app: 0|req: 5/5] 127"...,208}],1) = 208 (0xd0)
Here we see 1601 calls to the
read
system call. The process read 1601 * 32768 bytes = 52461568: the entire file was read sequentially until the requested range was reached. Here we read 50MB of data for nothing.
So what's happening? Who did those reads? Was it Python, or uWSGI?
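A quick sanity check on those numbers, taken straight from the truss output above:

```python
# Sanity check on the truss output: the worker performed sequential 32 KB
# reads until it had covered the end of the requested range.
block_size = 32768        # read() size observed in truss
read_calls = 1601         # number of read() calls counted
range_end = 52429823      # last byte of the requested range

bytes_read = read_calls * block_size
print(bytes_read)                      # 52461568, roughly 50 MB
assert bytes_read >= range_end + 1     # enough blocks to cover the range
```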
Flask's
send_file
function calls werkzeug's
wrap_file
function, which checks whether a WSGI file wrapper is provided in the request environment. In this case, there is one:
'wsgi.file_wrapper': <built-in function uwsgi_sendfile>
This means that the file descriptor is passed to a C function provided by uWSGI itself.
Let's confirm that this function is indeed called:
# gdb -p 95625
GNU gdb (GDB) 8.2 [GDB v8.2 for FreeBSD]
[...]
[Switching to LWP 101287 of process 95625]
0x000000080359981a in _kevent () from /lib/libc.so.7
(gdb) break py_uwsgi_sendfile
Breakpoint 1 at 0x48be84
(gdb) continue
Continuing.
Breakpoint 1, 0x000000000048be84 in py_uwsgi_sendfile ()
(gdb) bt
#0 0x000000000048be84 in py_uwsgi_sendfile ()
#1 0x0000000802f3b022 in PyEval_EvalFrameEx () from /usr/local/lib/libpython2.7.so.1
#2 0x0000000802f35277 in PyEval_EvalCodeEx () from /usr/local/lib/libpython2.7.so.1
#3 0x0000000802f3f5dd in ?? () from /usr/local/lib/libpython2.7.so.1
#4 0x0000000802f3ad1a in PyEval_EvalFrameEx () from /usr/local/lib/libpython2.7.so.1
#5 0x0000000802f35277 in PyEval_EvalCodeEx () from /usr/local/lib/libpython2.7.so.1
#6 0x0000000802f3f5dd in ?? () from /usr/local/lib/libpython2.7.so.1
#7 0x0000000802f3ad1a in PyEval_EvalFrameEx () from /usr/local/lib/libpython2.7.so.1
#8 0x0000000802f35277 in PyEval_EvalCodeEx () from /usr/local/lib/libpython2.7.so.1
#9 0x0000000802ec0bed in ?? () from /usr/local/lib/libpython2.7.so.1
#10 0x0000000802e9acf2 in PyObject_Call () from /usr/local/lib/libpython2.7.so.1
#11 0x0000000802f3bb0e in PyEval_EvalFrameEx () from /usr/local/lib/libpython2.7.so.1
#12 0x0000000802f3f6c4 in ?? () from /usr/local/lib/libpython2.7.so.1
#13 0x0000000802f3ad1a in PyEval_EvalFrameEx () from /usr/local/lib/libpython2.7.so.1
#14 0x0000000802f3f6c4 in ?? () from /usr/local/lib/libpython2.7.so.1
#15 0x0000000802f3ad1a in PyEval_EvalFrameEx () from /usr/local/lib/libpython2.7.so.1
#16 0x0000000802f3f6c4 in ?? () from /usr/local/lib/libpython2.7.so.1
#17 0x0000000802f3ad1a in PyEval_EvalFrameEx () from /usr/local/lib/libpython2.7.so.1
#18 0x0000000802f35277 in PyEval_EvalCodeEx () from /usr/local/lib/libpython2.7.so.1
#19 0x0000000802ec0bed in ?? () from /usr/local/lib/libpython2.7.so.1
#20 0x0000000802e9acf2 in PyObject_Call () from /usr/local/lib/libpython2.7.so.1
#21 0x0000000802ea77e9 in ?? () from /usr/local/lib/libpython2.7.so.1
#22 0x0000000802e9acf2 in PyObject_Call () from /usr/local/lib/libpython2.7.so.1
#23 0x0000000802ef90de in ?? () from /usr/local/lib/libpython2.7.so.1
#24 0x0000000802e9acf2 in PyObject_Call () from /usr/local/lib/libpython2.7.so.1
#25 0x0000000802f3efe6 in PyEval_CallObjectWithKeywords () from /usr/local/lib/libpython2.7.so.1
#26 0x0000000000489eb7 in python_call ()
#27 0x000000000048ccd9 in uwsgi_request_subhandler_wsgi ()
#28 0x000000000048bbb9 in uwsgi_request_wsgi ()
#29 0x00000000004253b3 in wsgi_req_recv ()
#30 0x000000000046d438 in simple_loop_run ()
#31 0x000000000046d2a0 in simple_loop ()
#32 0x0000000000474a86 in uwsgi_ignition ()
#33 0x000000000047489d in uwsgi_worker_run ()
#34 0x0000000000472481 in uwsgi_run ()
#35 0x000000000046fc9e in main ()
(gdb) continue
It is. Good.
But then why isn't
sendfile
called, and why are we reading the file for nothing?
Calling
py_uwsgi_sendfile
sets the sendfile object on the current request object and returns a pointer to the response method that actually calls sendfile. Then, when the time of response serialization comes,
uwsgi_response_subhandler_wsgi
checks whether the result of the application call is that method. If it is, it uses sendfile; if it's not, it delegates the serialization to the Python stack. Let's debug the uWSGI worker and see which path we're taking.
# gdb -p 96278
(gdb) break uwsgi_response_subhandler_wsgi:256
Breakpoint 1 at 0x48cd01: file plugins/python/wsgi_subhandler.c, line 252.
(gdb) continue
Continuing.
Breakpoint 1, uwsgi_response_subhandler_wsgi (wsgi_req=0x8008aa078) at plugins/python/wsgi_subhandler.c:252
252 plugins/python/wsgi_subhandler.c: No such file or directory.
(gdb) p wsgi_req->sendfile_obj
$1 = (void *) 0x807197420
(gdb) p wsgi_req->async_result
$2 = (void *) 0x8071c3cd0
The two pointers are different. We're not using sendfile at all!
Digging further reveals that the Python iterable used to serialize the response is an instance of werkzeug's _RangeWrapper class.
Ok. So why are we reading the file sequentially and not using any random access?
In this class, the following line is of interest:
self.seekable = hasattr(iterable, 'seekable') and iterable.seekable()
Let's experiment with a python shell:
>>> file = open('example_directory/big-file.dat', 'rb')
>>> hasattr(file, 'seekable') and file.seekable()
False
Ok.
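That False is a Python 2 artifact: the built-in file type predates the io module's seekable() protocol. A file opened through io.open (which is what Python 3's open returns) does advertise seekability for regular files, so in principle random access was available:

```python
# The Python 2 built-in file type has no seekable() method, so werkzeug's
# check fails and _RangeWrapper falls back to sequential reads.
# An io-based file object on a regular file reports seekability:
import io
import tempfile

with tempfile.NamedTemporaryFile() as tmp:
    tmp.write(b"0" * 1024)
    tmp.flush()
    f = io.open(tmp.name, 'rb')
    print(hasattr(f, 'seekable') and f.seekable())  # True
    f.close()
```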
Let's recap what we discovered so far:
- Calling Flask's
send_file
with conditional=True does not translate into using the actual sendfile mechanism of uWSGI, and thus not the one from the operating system either. - Flask and Werkzeug end up generating the response entirely themselves, in the worst way possible (reading the file sequentially to reach a specific position).
Digging further, this all makes sense. When uWSGI is using sendfile, it sends the entire file.
The code says it all: offset and len are forced to 0.
if (wsgi_req->sendfile_fd >= 0) {
    uWSGI_RELEASE_GIL
    uwsgi_response_sendfile_do(wsgi_req, wsgi_req->sendfile_fd, 0, 0);
    uWSGI_GET_GIL
}
With that code, it's impossible to honor range requests.
This is why Flask is instead (badly) handling them itself using python code.
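For comparison, with a seekable file, honoring a range is just a seek plus a bounded read. A sketch of what the efficient pure-Python path would look like (read_range is a hypothetical helper, not part of Flask or werkzeug):

```python
import os
import tempfile

def read_range(path, start, length):
    """Serve a byte range with random access: seek, then a bounded read."""
    with open(path, 'rb') as f:
        f.seek(start)            # jump straight to the range start
        return f.read(length)    # read only what was requested

# Example with a small throwaway file:
fd, path = tempfile.mkstemp()
os.write(fd, b'abcdefghij')
os.close(fd)
print(read_range(path, 3, 4))    # b'defg'
os.remove(path)
```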
Serving the file directly with uWSGI
Let's add the
--check-static
option to have uWSGI serve the file directly instead of going through the Python stack:
uwsgi --http=127.0.0.1:5001 \
--master --wsgi-file=flask_app.py \
--callable=app \
--home=venv \
--workers=1 --processes=1 \
--check-static ./example_directory/
Let's curl the file:
# curl -v http://127.0.0.1:5001/big-file.dat -o /dev/null
* Trying 127.0.0.1...
* TCP_NODELAY set
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to 127.0.0.1 (127.0.0.1) port 5001 (#0)
> GET /big-file.dat HTTP/1.1
> Host: 127.0.0.1:5001
> User-Agent: curl/7.62.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 104857600
< Last-Modified: Sat, 02 Mar 2019 20:00:48 GMT
<
{ [4004 bytes data]
100 100M 100 100M 0 0 393M 0 --:--:-- --:--:-- --:--:-- 393M
* Connection #0 to host 127.0.0.1 left intact
uWSGI confirms in the log file that it served the file itself using sendfile:
[pid: 99088|app: -1|req: -1/2] 127.0.0.1 () {28 vars in 315 bytes} [Sun Mar 3 11:41:37 2019] GET /get-big-file => generated 104857600 bytes in 302 msecs via sendfile() (HTTP/1.1
200) 2 headers in 92 bytes (1180 switches on core 0)
Trussing the worker process confirms it:
kevent(4,0x0,0,{ 3,EVFILT_READ,0x0,0,0x1,0x0 },1,0x0) = 1 (0x1)
accept(3,{ AF_INET 127.0.0.1:20028 },0x8008aa10c) = 5 (0x5)
read(5,"\0;\^A\0\^N\0REQUEST_METHOD\^C\0"...,4100) = 319 (0x13f)
lstat("/root",{ mode=drwxr-xr-x ,inode=1203840,size=3072,blksize=32768 }) = 0 (0x0)
lstat("/root/flask_nginx_ranges",{ mode=drwxr-xr-x ,inode=1314094,size=512,blksize=32768 }) = 0 (0x0)
lstat("/usr/local/www/example/example_directory",{ mode=drwxr-xr-x ,inode=1314098,size=512,blksize=32768 }) = 0 (0x0)
lstat("/usr/local/www/example/example_directory/big-file.dat",{ mode=-rw-r--r-- ,inode=1314112,size=104857600,blksize=32768 }) = 0 (0x0)
stat("/usr/local/www/example/example_directory/big-file.dat",{ mode=-rw-r--r-- ,inode=1314112,size=104857600,blksize=32768 }) = 0 (0x0)
openat(AT_FDCWD,"/usr/local/www/example/example_directory/big-file.dat",O_RDONLY,00) = 6 (0x6)
write(5,"HTTP/1.1 200 OK\r\nContent-Lengt"...,92) = 92 (0x5c)
sendfile(0x6,0x5,0x0,0x6400000,0x0,0x7fffffffe120,0x7fff00000000) ERR#35 'Resource temporarily unavailable'
poll({ 5/POLLOUT },1,4000) = 1 (0x1)
sendfile(0x6,0x5,0xbf2c,0x63f40d4,0x0,0x7fffffffe120,0x7fff00000000) ERR#35 'Resource temporarily unavailable'
poll({ 5/POLLOUT },1,4000) = 1 (0x1)
sendfile(0x6,0x5,0x29e3c,0x63d61c4,0x0,0x7fffffffe120,0x7fff00000000) ERR#35 'Resource temporarily unavailable'
poll({ 5/POLLOUT },1,4000) = 1 (0x1)
sendfile(0x6,0x5,0x39dc4,0x63c623c,0x0,0x7fffffffe120,0x7fff00000000) ERR#35 'Resource temporarily unavailable'
poll({ 5/POLLOUT },1,4000) = 1 (0x1)
[...]
[repeats multiple times]
[...]
sendfile(0x6,0x5,0x6347c18,0xb83e8,0x0,0x7fffffffe120,0x7fff00000000) ERR#35 'Resource temporarily unavailable'
poll({ 5/POLLOUT },1,4000) = 1 (0x1)
sendfile(0x6,0x5,0x63878d8,0x78728,0x0,0x7fffffffe120,0x7fff00000000) ERR#35 'Resource temporarily unavailable'
poll({ 5/POLLOUT },1,4000) = 1 (0x1)
sendfile(0x6,0x5,0x63c7598,0x38a68,0x0,0x7fffffffe120,0x7fff00000000) = 0 (0x0)
close(6) = 0 (0x0)
close(5) = 0 (0x0)
writev(2,[{"[pid: 99088|app: -1|req: -1/3] 1"...,231}],1) = 231 (0xe7)
Let's ask for a range:
# curl -v -r 52428800-52429823 http://127.0.0.1:5001/get-big-file -o /dev/null
* Trying 127.0.0.1...
* TCP_NODELAY set
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to 127.0.0.1 (127.0.0.1) port 5001 (#0)
> GET /get-big-file HTTP/1.1
> Host: 127.0.0.1:5001
> Range: bytes=52428800-52429823
> User-Agent: curl/7.62.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 104857600
< Last-Modified: Sat, 02 Mar 2019 20:00:48 GMT
<
{ [4004 bytes data]
100 100M 100 100M 0 0 348M 0 --:--:-- --:--:-- --:--:-- 348M
* Connection #0 to host 127.0.0.1 left intact
Hmm, we got the entire file.
uWSGI does not honor ranges by default.
Let's restart the server with the --honour-range option set (note that the route is now mapped with --static-map):
uwsgi --http=127.0.0.1:5001 \
--master \
--wsgi-file=flask_app.py \
--callable=app \
--home=venv \
--workers=1 --processes=1 \
--static-map=/get-big-file=example_directory/big-file.dat \
--honour-range
Query a range:
# curl -v -r 52428800-52429823 http://127.0.0.1:5001/get-big-file -o /dev/null
* Trying 127.0.0.1...
* TCP_NODELAY set
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to 127.0.0.1 (127.0.0.1) port 5001 (#0)
> GET /get-big-file HTTP/1.1
> Host: 127.0.0.1:5001
> Range: bytes=52428800-52429823
> User-Agent: curl/7.62.0
> Accept: */*
>
< HTTP/1.1 206 Partial Content
< Content-Length: 1024
< Content-Range: bytes 52428800-52429823/104857600
< Last-Modified: Sat, 02 Mar 2019 20:00:48 GMT
<
{ [1024 bytes data]
100 1024 100 1024 0 0 333k 0 --:--:-- --:--:-- --:--:-- 500k
* Connection #0 to host 127.0.0.1 left intact
This time we got our range.
Truss confirms that the sendfile syscall was called only once, copying just the 1024 bytes we wanted:
kevent(4,0x0,0,{ 3,EVFILT_READ,0x0,0,0x1,0x0 },1,0x0) = 1 (0x1)
accept(3,{ AF_INET 127.0.0.1:26328 },0x8008aa10c) = 5 (0x5)
read(5,"\0`\^A\0\^N\0REQUEST_METHOD\^C\0"...,4100) = 356 (0x164)
__getcwd("/root/flask_nginx_ranges",1024) = 0 (0x0)
lstat("/usr/local/www/example/example_directory",{ mode=drwxr-xr-x ,inode=1314098,size=512,blksize=32768 }) = 0 (0x0)
lstat("/usr/local/www/example/example_directory/big-file.dat",{ mode=-rw-r--r-- ,inode=1314112,size=104857600,blksize=32768 }) = 0 (0x0)
stat("/usr/local/www/example/example_directory/big-file.dat",{ mode=-rw-r--r-- ,inode=1314112,size=104857600,blksize=32768 }) = 0 (0x0)
lstat("/root",{ mode=drwxr-xr-x ,inode=1203840,size=3072,blksize=32768 }) = 0 (0x0)
lstat("/root/flask_nginx_ranges",{ mode=drwxr-xr-x ,inode=1314094,size=512,blksize=32768 }) = 0 (0x0)
lstat("/usr/local/www/example/example_directory",{ mode=drwxr-xr-x ,inode=1314098,size=512,blksize=32768 }) = 0 (0x0)
lstat("/usr/local/www/example/example_directory/big-file.dat",{ mode=-rw-r--r-- ,inode=1314112,size=104857600,blksize=32768 }) = 0 (0x0)
stat("/usr/local/www/example/example_directory/big-file.dat",{ mode=-rw-r--r-- ,inode=1314112,size=104857600,blksize=32768 }) = 0 (0x0)
issetugid() = 0 (0x0)
open("/usr/share/zoneinfo/UTC",O_RDONLY,00) = 6 (0x6)
fstat(6,{ mode=-r--r--r-- ,inode=2087312,size=118,blksize=32768 }) = 0 (0x0)
read(6,"TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0"...,41448) = 118 (0x76)
close(6) = 0 (0x0)
issetugid() = 0 (0x0)
open("/usr/share/zoneinfo/posixrules",O_RDONLY,06423226000) = 6 (0x6)
fstat(6,{ mode=-r--r--r-- ,inode=2087322,size=3519,blksize=32768 }) = 0 (0x0)
read(6,"TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0"...,41448) = 3519 (0xdbf)
close(6) = 0 (0x0)
openat(AT_FDCWD,"/usr/local/www/example/example_directory/big-file.dat",O_RDONLY,00) = 6 (0x6)
write(5,"HTTP/1.1 206 Partial Content\r\n"...,150) = 150 (0x96)
sendfile(0x6,0x5,0x3200000,0x400,0x0,0x7fffffffe110,0xffffffff00000000) = 0 (0x0)
close(6) = 0 (0x0)
close(5) = 0 (0x0)
writev(2,[{"[pid: 99164|app: -1|req: -1/1] 1"...,223}],1) = 223 (0xdf)
Serving files this way is extremely efficient. However, it completely bypasses the python stack, which can be a problem if you want to serve files depending on ACLs or other logic.
Putting NGINX in front
Let's put NGINX as a reverse proxy in front of our application.
Let's try to use the X-Accel feature, which allows the application layer to tell NGINX to serve the file directly.
daemon off;
master_process off;
user www;
error_log stderr debug;

events {
    worker_connections 2048;
}

http {
    server {
        sendfile on;
        listen 5002;

        location /usr/local/www/example/example_directory {
            internal;
            alias /usr/local/www/example/example_directory;
        }

        location / {
            include uwsgi_params;
            uwsgi_pass unix:/usr/local/www/example/uwsgi.sock;
        }
    }
}
We need to start uWSGI with
--file-serve-mode=nginx
to tell it to use the X-Accel feature.
uwsgi --socket=uwsgi.sock \
--master \
--wsgi-file=flask_app.py \
--callable=app \
--home=venv \
--workers=1 --processes=1 \
--static-map=/get-big-file=./example_directory/big-file.dat \
--file-serve-mode=nginx
Let's start NGINX:
nginx -c nginx.conf
Let's query a range:
# curl -v -r 52428800-52429823 http://127.0.0.1:5002/get-big-file -o /dev/null
* Trying 127.0.0.1...
* TCP_NODELAY set
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to 127.0.0.1 (127.0.0.1) port 5002 (#0)
> GET /get-big-file HTTP/1.1
> Host: 127.0.0.1:5002
> Range: bytes=52428800-52429823
> User-Agent: curl/7.62.0
> Accept: */*
>
< HTTP/1.1 206 Partial Content
< Server: nginx/1.14.2
< Date: Sun, 03 Mar 2019 11:52:41 GMT
< Content-Type: text/plain
< Content-Length: 1024
< Last-Modified: Sat, 02 Mar 2019 20:00:48 GMT
< Connection: keep-alive
< ETag: "5c7ae0f0-6400000"
< Content-Range: bytes 52428800-52429823/104857600
<
{ [1024 bytes data]
100 1024 100 1024 0 0 1000k 0 --:--:-- --:--:-- --:--:-- 1000k
* Connection #0 to host 127.0.0.1 left intact
We got our range!
NGINX's debug log confirms that uWSGI simply returned an empty response with the
X-Accel-Redirect
header set:
2019/03/03 12:52:41 [debug] 7723#100110: *3 http run request: "/get-big-file?"
2019/03/03 12:52:41 [debug] 7723#100110: *3 http upstream check client, write event:1, "/get-big-file"
2019/03/03 12:52:41 [debug] 7723#100110: timer delta: 0
2019/03/03 12:52:41 [debug] 7723#100110: worker cycle
2019/03/03 12:52:41 [debug] 7723#100110: kevent timer: 60000, changes: 0
2019/03/03 12:52:41 [debug] 7723#100110: kevent events: 1
2019/03/03 12:52:41 [debug] 7723#100110: kevent: 7: ft:-1 fl:8020 ff:00000000 d:140 ud:00000008023873D1
2019/03/03 12:52:41 [debug] 7723#100110: *3 http upstream request: "/get-big-file?"
2019/03/03 12:52:41 [debug] 7723#100110: *3 http upstream process header
2019/03/03 12:52:41 [debug] 7723#100110: *3 malloc: 0000000802268000:4096
2019/03/03 12:52:41 [debug] 7723#100110: *3 recv: eof:1, avail:140, err:0
2019/03/03 12:52:41 [debug] 7723#100110: *3 recv: fd:7 140 of 4096
2019/03/03 12:52:41 [debug] 7723#100110: *3 http uwsgi status 200 "200 OK"
2019/03/03 12:52:41 [debug] 7723#100110: *3 http uwsgi header: "X-Accel-Redirect: /usr/local/www/example/example_directory/big-file.dat"
2019/03/03 12:52:41 [debug] 7723#100110: *3 http uwsgi header: "Last-Modified: Sat, 02 Mar 2019 20:00:48 GMT"
2019/03/03 12:52:41 [debug] 7723#100110: *3 http uwsgi header done
2019/03/03 12:52:41 [debug] 7723#100110: *3 finalize http upstream request: -5
2019/03/03 12:52:41 [debug] 7723#100110: *3 finalize http uwsgi request
2019/03/03 12:52:41 [debug] 7723#100110: *3 free rr peer 1 0
2019/03/03 12:52:41 [debug] 7723#100110: *3 close http upstream connection: 7
2019/03/03 12:52:41 [debug] 7723#100110: *3 free: 0000000802233780, unused: 48
2019/03/03 12:52:41 [debug] 7723#100110: *3 event timer del: 7: 3615617306
2019/03/03 12:52:41 [debug] 7723#100110: *3 reusable connection: 0
2019/03/03 12:52:41 [debug] 7723#100110: *3 internal redirect: "/usr/local/www/example/example_directory/big-file.dat?"
2019/03/03 12:52:41 [debug] 7723#100110: *3 rewrite phase: 1
2019/03/03 12:52:41 [debug] 7723#100110: *3 test location: "/"
2019/03/03 12:52:41 [debug] 7723#100110: *3 test location: "/usr/local/www/example/example_directory"
2019/03/03 12:52:41 [debug] 7723#100110: *3 using configuration "/usr/local/www/example/example_directory"
2019/03/03 12:52:41 [debug] 7723#100110: *3 http cl:-1 max:1048576
2019/03/03 12:52:41 [debug] 7723#100110: *3 rewrite phase: 3
2019/03/03 12:52:41 [debug] 7723#100110: *3 post rewrite phase: 4
Trussing the NGINX process confirms that the sendfile syscall was used to serve the range:
openat(AT_FDCWD,"/usr/local/www/example/example_directory/big-file.dat",O_RDONLY|O_NONBLOCK,00) = 8 (0x8)
fstat(8,{ mode=-rw-r--r-- ,inode=1314112,size=104857600,blksize=32768 }) = 0 (0x0)
sendfile(0x8,0x7,0x3200000,0x400,0x7fffffffdd88,0x7fffffffde40,0x0) = 0 (0x0)
write(4,"127.0.0.1 - - [03/Mar/2019:12:57:55 +0100] "GET /get-big-file HTTP/1.1" 206 1024 "-" "curl/7.62.0"\n",99) = 99 (0x63)
close(8) = 0 (0x0)
Wrapping up
What have we learned?
- We learned that Flask's
send_file
function does not use any optimized mechanism to output the file when conditional=True is set. Worse, when a range is requested, it reads the entire file sequentially until the range position is reached. - We learned that uWSGI's
--file-serve-mode
option is not used for application results; it only applies when uWSGI serves static files itself. - Consequently, we learned that there is no way to conditionally and efficiently serve files using Flask+uWSGI while letting clients use HTTP range requests.
In the end, with NGINX configured correctly, the easiest Flask setup is to simply return a response with the header set:
from flask import make_response

@app.route('/get-big-file')
def get_big_file():
    response = make_response()
    response.headers['X-Accel-Redirect'] = 'example_directory/big-file.dat'
    return response
NGINX then performs an internal redirect, and all the request headers (ETag, cache headers, ranges) are handled without us having to worry about them.
We can even offload serving the file to another server using
proxy_pass
. This allows splitting the infrastructure into two parts: application servers (with plenty of CPU) and CDN-like servers (with plenty of bandwidth), while still being able to use ACLs or application logic to decide whether or not a request is allowed to access a file.
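A hedged sketch of that offloading setup, in the style of the nginx config above (files.internal and /protected-files/ are made-up names for illustration):

```nginx
# Hypothetical: the internal location targeted by X-Accel-Redirect can
# proxy to a dedicated file-serving host instead of aliasing a local
# directory, so the application host never streams the bytes itself.
location /protected-files/ {
    internal;
    proxy_pass http://files.internal/;
}
```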