Example HTTP/2-only WSGI Server

This example is a more complex HTTP/2 server that acts as a WSGI server, passing data to an arbitrary WSGI application. This example is written using asyncio. The server supports most of PEP-3333, and so could in principle be used as a production WSGI server: however, that’s not recommended as certain shortcuts have been taken to ensure ease of implementation and understanding.

The main advantages of this example are:

  1. It properly demonstrates HTTP/2 flow control management.

  2. It demonstrates how to plug h2 into a larger, more complex application.
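At the heart of the example is one concurrency pattern: a WSGI worker thread schedules a write on the event loop with ``call_soon_threadsafe`` and then blocks on a ``threading.Event`` until the loop has actually performed the write. The following minimal sketch (hypothetical names, not taken from the example below) shows that hand-off in isolation:

    import asyncio
    import threading

    def send_from_worker(loop, transport, data):
        # Runs in a worker thread; blocks until the event loop thread
        # has actually written ``data`` to the transport.
        done = threading.Event()

        def _write():
            # Runs in the event loop thread.
            transport.write(data)
            done.set()

        loop.call_soon_threadsafe(_write)
        done.wait()

The full example follows: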

# -*- coding: utf-8 -*-
"""
asyncio-server.py
~~~~~~~~~~~~~~~~~

A fully-functional WSGI server, written using h2. Requires asyncio.

To test it, try installing httpbin from pip (``pip install httpbin``) and then
running the server (``python asyncio-server.py httpbin:app``).
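
Once the server is running, any HTTP/2-capable client can exercise it, e.g.
``curl -k --http2 https://localhost:8443/ip`` (this assumes a curl build with
HTTP/2 support).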

This server does not support HTTP/1.1: it is an HTTP/2-only WSGI server. The
purpose of this code is to demonstrate how to integrate h2 into a more
complex application, and to demonstrate several principles of concurrent
programming.

The architecture looks like this:

+---------------------------------+
|     1x HTTP/2 Server Thread     |
|        (running asyncio)        |
+---------------------------------+
+---------------------------------+
|    N WSGI Application Threads   |
|           (no asyncio)          |
+---------------------------------+

Essentially, we spin up an asyncio-based event loop in the main thread. This
launches one HTTP/2 Protocol instance for each inbound connection, all of which
will read and write data from within the main thread in an asynchronous manner.

When each HTTP request comes in, the server will build the WSGI environment
dictionary and create a ``Stream`` object. This object will hold the relevant
state for the request/response pair and will act as the WSGI side of the logic.
That object will then be passed to a background thread pool, and when a worker
is available the WSGI logic will begin to be executed. This model ensures that
the asyncio web server itself is never blocked by the WSGI application.

The WSGI application and the HTTP/2 server communicate via an asyncio queue,
together with locks and threading events. The locks themselves are implicit in
asyncio's "call_soon_threadsafe", which allows for a background thread to
register an action with the main asyncio thread. When the asyncio thread
eventually takes the action in question it sets a threading event, signaling
to the background thread that it is free to continue its work.

To make the WSGI application work with flow control, there is a very important
invariant that must be observed. Any WSGI action that would cause data to be
emitted to the network MUST be accompanied by a threading Event that is not
set until that data has been written to the transport. This ensures that the
WSGI application *blocks* until the data is actually sent. The reason we
require this invariant is that the HTTP/2 server may choose to re-order some
data chunks for flow control reasons: that is, the application for stream X may
have actually written its data first, but the server may elect to send the data
for stream Y first. This means that it's vital that there not be *two* writes
for stream X active at any one point or they may get reordered, which would be
particularly terrible.

Thus, the server must cooperate to ensure that each threading event only fires
when the *complete* data for that event has been written to the asyncio
transport. Any earlier will cause untold craziness.
"""
import asyncio
import importlib
import queue
import ssl
import sys
import threading

from h2.config import H2Configuration
from h2.connection import H2Connection
from h2.events import (
    DataReceived, RequestReceived, WindowUpdated, StreamEnded, StreamReset
)


# Used to signal that a request has completed.
#
# This is a convenient way to do "in-band" signaling of stream completion
# without doing anything so heavyweight as using a class. Essentially, we can
# test identity against this empty object. In fact, this is so convenient that
# we use this object for all streams, for data in both directions: in and out.
END_DATA_SENTINEL = object()
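# (The test is by identity, e.g. ``if chunk is END_DATA_SENTINEL:`` -- note
# ``is``, not ``==``.)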

# The WSGI callable. Stored here so that the protocol instances can get hold
# of the data.
APPLICATION = None


class H2Protocol(asyncio.Protocol):
    def __init__(self):
        config = H2Configuration(client_side=False, header_encoding='utf-8')

        # Our server-side state machine.
        self.conn = H2Connection(config=config)

        # The backing transport.
        self.transport = None

        # A dictionary of ``Stream`` objects, keyed by their stream ID. This
        # makes it easy to route data to the correct WSGI application instance.
        self.streams = {}

        # A queue of data emitted by WSGI applications that has not yet been
        # sent. Each stream may only have one chunk of data in either this
        # queue or the flow_controlled_data dictionary at any one time.
        self._stream_data = asyncio.Queue()

        # Data that has been pulled off the queue that is for a stream blocked
        # behind flow control limitations. This is used to avoid spinning on
        # the _stream_data queue when a stream cannot have its data sent. Data
        # that cannot be sent on the connection when it is popped off the
        # queue gets placed here until the stream flow control window opens up
        # again.
        self._flow_controlled_data = {}

        # A reference to the loop in which this protocol runs. This is needed
        # to synchronise with background threads.
        self._loop = asyncio.get_event_loop()

        # Any streams that have been remotely reset. We keep track of these to
        # ensure that we don't emit data from a WSGI application whose stream
        # has been cancelled.
        self._reset_streams = set()

        # Keep track of the loop sending task so we can kill it when the
        # connection goes away.
        self._send_loop_task = None

    def connection_made(self, transport):
        """
        The connection has been made. Here we need to save off our transport,
        do basic HTTP/2 connection setup, and then start our data writing
        coroutine.
        """
        self.transport = transport
        self.conn.initiate_connection()
        self.transport.write(self.conn.data_to_send())
        self._send_loop_task = self._loop.create_task(self.sending_loop())

    def connection_lost(self, exc):
        """
        With the end of the connection, we just want to cancel our data sending
        coroutine.
        """
        self._send_loop_task.cancel()

    def data_received(self, data):
        """
        Process inbound data.
        """
        events = self.conn.receive_data(data)

        for event in events:
            if isinstance(event, RequestReceived):
                self.request_received(event)
            elif isinstance(event, DataReceived):
                self.data_frame_received(event)
            elif isinstance(event, WindowUpdated):
                self.window_opened(event)
            elif isinstance(event, StreamEnded):
                self.end_stream(event)
            elif isinstance(event, StreamReset):
                self.reset_stream(event)

        outbound_data = self.conn.data_to_send()
        if outbound_data:
            self.transport.write(outbound_data)

    def window_opened(self, event):
        """
        The flow control window got opened.

        This is important because it's possible that we were unable to send
        some WSGI data because the flow control window was too small. If that
        happens, the sending_loop coroutine starts buffering data.

        As the window gets opened, we need to unbuffer the data. We do that by
        placing the data chunks back on the back of the send queue and letting
        the sending loop take another shot at sending them.

        This system only works because we require that each stream only have
        *one* data chunk in the sending queue at any time. The threading events
        force this invariant to remain true.
        """
        if event.stream_id:
            # This is specific to a single stream.
            if event.stream_id in self._flow_controlled_data:
                self._stream_data.put_nowait(
                    self._flow_controlled_data.pop(event.stream_id)
                )
        else:
            # This event is specific to the connection. Free up *all* the
            # streams. This is a bit tricky, but we *must not* yield the flow
            # of control here or it all goes wrong.
            for data in self._flow_controlled_data.values():
                self._stream_data.put_nowait(data)

            self._flow_controlled_data = {}

    async def sending_loop(self):
        """
        A call that loops forever, attempting to send data. This sending loop
        contains most of the flow-control smarts of this class: it pulls data
        off of the asyncio queue and then attempts to send it.

        The difficulties here are all around flow control. Specifically, a
        chunk of data may be too large to send. In this case, what will happen
        is that this coroutine will attempt to send what it can and will then
        store the unsent data locally. When a flow control event comes in that
        data will be freed up and placed back onto the asyncio queue, causing
        it to pop back up into the sending logic of this coroutine.

        This method explicitly *does not* handle HTTP/2 priority. That adds an
        extra layer of complexity to what is already a fairly complex method,
        and we'll look at how to do it another time.

        This coroutine explicitly *does not end*.
        """
        while True:
            stream_id, data, event = await self._stream_data.get()

            # If this stream got reset, just drop the data on the floor. Note
            # that we still need to set the event here to make sure that the
            # application doesn't lock up.
            if stream_id in self._reset_streams:
                event.set()
                continue

            # Check if the body is done. If it is, this is really easy! Again,
            # we *must* set the event here or the application will lock up.
            if data is END_DATA_SENTINEL:
                self.conn.end_stream(stream_id)
                self.transport.write(self.conn.data_to_send())
                event.set()
                continue

            # We need to send data, but not to exceed the flow control window.
            # For that reason, grab only the data that fits: we'll buffer the
            # rest.
            window_size = self.conn.local_flow_control_window(stream_id)
            chunk_size = min(window_size, len(data))
            data_to_send = data[:chunk_size]
            data_to_buffer = data[chunk_size:]
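            # For example, with the protocol-default window of 65,535 bytes
            # and 100,000 bytes of application data, we send the first 65,535
            # bytes now and buffer the remaining 34,465 until a WindowUpdated
            # event arrives.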

            if data_to_send:
                # There's a maximum frame size we have to respect. Because we
                # aren't paying any attention to priority here, we can quite
                # safely just split this string up into chunks of max frame
                # size and blast them out.
                #
                # In a *real* application you'd want to consider priority here.
                max_size = self.conn.max_outbound_frame_size
                chunks = (
                    data_to_send[x:x+max_size]
                    for x in range(0, len(data_to_send), max_size)
                )
                for chunk in chunks:
                    self.conn.send_data(stream_id, chunk)
                self.transport.write(self.conn.data_to_send())

            # If there's data left to buffer, we should do that. Put it in a
            # dictionary and *don't set the event*: the app must not generate
            # any more data until we've got rid of all of this data.
            if data_to_buffer:
                self._flow_controlled_data[stream_id] = (
                    stream_id, data_to_buffer, event
                )
            else:
                # We sent everything. We can let the WSGI app progress.
                event.set()

    def request_received(self, event):
        """
        An HTTP/2 request has been received. We need to invoke the WSGI
        application in a background thread to handle it.
        """
        # First, we are going to want an object to hold all the relevant state
        # for this request/response. For that, we have a stream object. We
        # need to store the stream object somewhere reachable for when data
        # arrives later.
        s = Stream(event.stream_id, self)
        self.streams[event.stream_id] = s

        # Next, we need to build the WSGI environ dictionary.
        environ = _build_environ_dict(event.headers, s)

        # Finally, we want to throw these arguments out to a threadpool and
        # let it run.
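        # Passing ``None`` as the executor selects the event loop's default
        # executor, which is a ThreadPoolExecutor: the WSGI application runs
        # on one of its worker threads.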
        self._loop.run_in_executor(
            None,
            s.run_in_threadpool,
            APPLICATION,
            environ,
        )

    def data_frame_received(self, event):
        """
        Data has been received by the WSGI server and needs to be dispatched
        to a running application.

        Note that the flow control window is not modified here. That's
        deliberate: see Stream.__next__ for a longer discussion of why.
        """
        # Grab the stream in question from our dictionary and pass it on.
        stream = self.streams[event.stream_id]
        stream.receive_data(event.data, event.flow_controlled_length)

    def end_stream(self, event):
        """
        The stream data is complete.
        """
        stream = self.streams[event.stream_id]
        stream.request_complete()

    def reset_stream(self, event):
        """
        A stream got forcefully reset.

        This is a tricky thing to deal with because WSGI doesn't really have a
        good notion for it. Essentially, you have to let the application run
        until completion, but not actually let it send any data.

        We do that by discarding any data we currently have for it, and then
        marking the stream as reset to allow us to spot when that stream is
        trying to send data and drop that data on the floor.

        We then *also* signal the WSGI application that no more data is
        incoming, to ensure that it does not attempt to do further reads of the
        data.
        """
        if event.stream_id in self._flow_controlled_data:
            del self._flow_controlled_data[event.stream_id]

        self._reset_streams.add(event.stream_id)
        self.end_stream(event)

    def data_for_stream(self, stream_id, data):
        """
        Thread-safe method called from outside the main asyncio thread in order
        to send data on behalf of a WSGI application.

        Places data being written by a stream on an asyncio queue. Returns a
        threading event that will fire when that data is sent.
        """
        event = threading.Event()
        self._loop.call_soon_threadsafe(
            self._stream_data.put_nowait,
            (stream_id, data, event)
        )
        return event

    def send_response(self, stream_id, headers):
        """
        Thread-safe method called from outside the main asyncio thread in order
        to send the HTTP response headers on behalf of a WSGI application.

        Returns a threading event that will fire when the headers have been
        emitted to the network.
        """
        event = threading.Event()

        def _inner_send(stream_id, headers, event):
            self.conn.send_headers(stream_id, headers, end_stream=False)
            self.transport.write(self.conn.data_to_send())
            event.set()

        self._loop.call_soon_threadsafe(
            _inner_send,
            stream_id,
            headers,
            event
        )
        return event

    def open_flow_control_window(self, stream_id, increment):
        """
        Opens a flow control window for the given stream by the given amount.
        Called from a WSGI thread. Does not return an event because there's no
        need to block on this action; it may take place at any time.
        """
        def _inner_open(stream_id, increment):
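            # Both increments are needed: DATA frames consume both the
            # per-stream window and the connection-level window, so we
            # replenish the stream window (first call) and the connection
            # window (second call, with a stream_id of None).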
            self.conn.increment_flow_control_window(increment, stream_id)
            self.conn.increment_flow_control_window(increment, None)
            self.transport.write(self.conn.data_to_send())

        self._loop.call_soon_threadsafe(
            _inner_open,
            stream_id,
            increment,
        )


class Stream:
    """
    This class holds all of the state for a single stream. It also provides
    several of the callables used by the WSGI application. Finally, it provides
    the logic for actually interfacing with the WSGI application.

    For these reasons, the object has *strict* requirements on thread-safety.
    While the object can be initialized in the main asyncio thread, the
    ``run_in_threadpool`` method *must* be called from outside that thread. At
    that point, the main asyncio thread may only call specific methods.
    """
    def __init__(self, stream_id, protocol):
        self.stream_id = stream_id
        self._protocol = protocol

        # Queue for data that has been received from the network. This is a
        # thread-safe queue, to allow both the WSGI application to block on
        # receiving more data and to allow the asyncio server to keep sending
        # more data.
        #
        # This queue is unbounded in size, but in practice it cannot contain
        # too much data because the flow control window doesn't get adjusted
        # unless data is removed from it.
        self._received_data = queue.Queue()

        # This buffer is used to hold partial chunks of data from
        # _received_data that were not returned out of ``read`` and friends.
        self._temp_buffer = b''

        # Temporary variables that allow us to keep hold of the headers and
        # response status until such time as the application needs us to send
        # them.
        self._response_status = b''
        self._response_headers = []
        self._headers_emitted = False

        # Whether the application has received all the data from the network
        # or not. This allows us to short-circuit some reads.
        self._complete = False

    def receive_data(self, data, flow_controlled_size):
        """
        Called by the H2Protocol when more data has been received from the
        network.

        Places the data directly on the queue in a thread-safe manner without
        blocking. Does not introspect or process the data.
        """
        self._received_data.put_nowait((data, flow_controlled_size))

    def request_complete(self):
        """
        Called by the H2Protocol when all the request data has been received.

        This works by placing the ``END_DATA_SENTINEL`` on the queue. The
        reading code knows, when it sees the ``END_DATA_SENTINEL``, to expect
        no more data from the network. This ensures that the state of the
        application only changes when it has finished processing the data from
        the network, even though the server may have long-since finished
        receiving all the data for this request.
        """
        self._received_data.put_nowait((END_DATA_SENTINEL, None))

    def run_in_threadpool(self, wsgi_application, environ):
        """
        This method should be invoked in a threadpool. At the point this method
        is invoked, the only safe methods to call from the original thread are
        ``receive_data`` and ``request_complete``: any other method is unsafe.

        This method handles the WSGI logic. It invokes the application callable
        in this thread, passing control over to the WSGI application. It then
        ensures that the data makes it back to the HTTP/2 connection via
        the thread-safe APIs provided below.
        """
        result = wsgi_application(environ, self.start_response)

        try:
            for data in result:
                self.write(data)
        finally:
            # This signals that we're done with data, which lets the server
            # clean up its state: we're done here.
            self.write(END_DATA_SENTINEL)

    # The next few methods are called by the WSGI application. Firstly, the
    # three methods provided by the input stream.
    def read(self, size=None):
        """
        Called by the WSGI application to read data.

        This method is one of the two that explicitly pump the input data
        queue, which means it deals with the ``_complete`` flag and the
        ``END_DATA_SENTINEL``.
        """
        # If we've already seen the END_DATA_SENTINEL, return immediately.
        if self._complete:
            return b''

        # If we've been asked to read everything, just iterate over ourselves.
        if size is None:
            return b''.join(self)

        # Otherwise, as long as we don't have enough data, spin looking for
        # another data chunk.
        data = b''
        while len(data) < size:
            try:
                chunk = next(self)
            except StopIteration:
                break

            # Concatenating strings this way is slow, but that's ok, this is
            # just a demo.
            data += chunk

        # We have *at least* enough data to return, but we may have too much.
        # If we do, throw it on a buffer: we'll use it later.
        to_return = data[:size]
        self._temp_buffer = data[size:]
        return to_return

    def readline(self, hint=None):
        """
        Called by the WSGI application to read a single line of data.

        This method rigorously observes the ``hint`` parameter: it will only
        ever read that much data. It then splits the data on a newline
        character and throws everything it doesn't need into a buffer.
        """
        data = self.read(hint)
        first_newline = data.find(b'\n')
        if first_newline == -1:
            # No newline, return all the data
            return data

        # We want to slice the data so that the head *includes* the first
        # newline. Then, any data left in this line we don't care about should
        # be prepended to the internal buffer.
        head, tail = data[:first_newline + 1], data[first_newline + 1:]
        self._temp_buffer = tail + self._temp_buffer

        return head

    def readlines(self, hint=None):
        """
        Called by the WSGI application to read several lines of data.

        This method is really pretty stupid. It rigorously observes the
        ``hint`` parameter, and quite happily returns the input split into
        lines.
        """
        # This method is *crazy inefficient*, but it's also a pretty stupid
        # method to call.
        data = self.read(hint)
        lines = data.split(b'\n')

        # Split removes the newline character, but we want it, so put it back.
        lines = [line + b'\n' for line in lines]

        # Except if the last character was a newline character we now have an
        # extra line that is just a newline: pull that out.
        if lines[-1] == b'\n':
            lines = lines[:-1]
        return lines

    def start_response(self, status, response_headers, exc_info=None):
        """
        This is the PEP-3333 mandated start_response callable.

        All it does is store the headers for later sending, and return our
        ``write`` callable.
        """
        if self._headers_emitted and exc_info is not None:
            raise exc_info[1].with_traceback(exc_info[2])

        assert not self._response_status or exc_info is not None
        self._response_status = status
        self._response_headers = response_headers

        return self.write

    def write(self, data):
        """
        Provides some data to write.

        This function *blocks* until such time as the data is allowed by
        HTTP/2 flow control. This allows a client to slow or pause the response
        as needed.

        PEP-3333 discourages use of this function, but once we have it it
        becomes quite convenient, so this app actually runs all writes through
        this function.
        """
        if not self._headers_emitted:
            self._emit_headers()
        event = self._protocol.data_for_stream(self.stream_id, data)
        event.wait()
        return

    def _emit_headers(self):
        """
        Sends the response headers.

        This is only called from the write callable and should only ever be
        called once. It does some minor processing (converts the status line
        into a status code because reason phrases are evil) and then passes
        the headers on to the server. This call explicitly blocks until the
        server notifies us that the headers have reached the network.
        """
        assert self._response_status and self._response_headers
        assert not self._headers_emitted
        self._headers_emitted = True

        # We only need the status code
        status = self._response_status.split(" ", 1)[0]
        headers = [(":status", status)]
        headers.extend(self._response_headers)
        event = self._protocol.send_response(self.stream_id, headers)
        event.wait()
        return

    # These two methods implement the iterator protocol. This allows a WSGI
    # application to iterate over this Stream object to get the data.
    def __iter__(self):
        return self

    def __next__(self):
        # If the complete request has been read, abort immediately.
        if self._complete:
            raise StopIteration()

        # If we have data stored in a temporary buffer for any reason, return
        # that and clear the buffer.
        #
        # This can actually only happen when the application uses one of the
        # read* callables, but that's fine.
        if self._temp_buffer:
            buffered_data = self._temp_buffer
            self._temp_buffer = b''
            return buffered_data

        # Otherwise, pull data off the queue (blocking as needed). If this is
        # the end of the request, we're done here: mark ourselves as complete
        # and call it a day. Otherwise, open the flow control window an
        # appropriate amount and hand the chunk off.
        chunk, chunk_size = self._received_data.get()
        if chunk is END_DATA_SENTINEL:
            self._complete = True
            raise StopIteration()

        # Let's talk a little bit about why we're opening the flow control
        # window *here*, and not in the server thread.
        #
        # The purpose of HTTP/2 flow control is to allow for servers and
        # clients to avoid needing to buffer data indefinitely because their
        # peer is producing data faster than they can consume it. As a result,
        # it's important that the flow control window be opened as late in the
        # processing as possible. In this case, we open the flow control window
        # exactly when the server hands the data to the application. This means
        # that the flow control window essentially signals to the remote peer
        # how much data hasn't even been *seen* by the application yet.
        #
        # If you wanted to be really clever you could consider not opening the
        # flow control window until the application asks for the *next* chunk
        # of data. That means that any buffers at the application level are now
        # included in the flow control window processing. In my opinion, the
        # advantage of that process does not outweigh the extra logical
        # complexity involved in doing it, so we don't bother here.
        #
        # Another note: you'll notice that we don't include the _temp_buffer in
        # our flow control considerations. This means you could in principle
        # lead us to buffer slightly more than one connection flow control
        # window's worth of data. That risk is considered acceptable for the
        # much simpler logic available here.
        #
        # Finally, this is a pretty dumb flow control window management scheme:
        # it causes us to emit a *lot* of window updates. A smarter server
        # would want to use the content-length header to determine whether
        # flow control window updates need to be emitted at all, and then to be
        # more efficient about emitting them to avoid firing them off really
        # frequently. For an example like this, there's very little gained by
        # worrying about that.
        self._protocol.open_flow_control_window(self.stream_id, chunk_size)

        return chunk


def _build_environ_dict(headers, stream):
    """
    Build the WSGI environ dictionary for a given request. To do that, we'll
    temporarily create a dictionary for the headers. While this isn't actually
    a valid way to represent headers, we know that the special headers we need
    can only have one appearance in the block.

    This code is arguably somewhat incautious: the conversion to dictionary
    should only happen in a way that allows us to correctly join headers that
    appear multiple times. That's acceptable in a demo app: in a productised
    version you'd want to fix it.
    """
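    # As a hypothetical illustration: a request carrying the header block
    # [(':method', 'GET'), (':path', '/ip?fmt=json'), (':scheme', 'https'),
    #  (':authority', 'localhost:8443')] would produce REQUEST_METHOD='GET',
    # PATH_INFO='/ip', QUERY_STRING='fmt=json', SERVER_NAME='localhost' and
    # SERVER_PORT='8443' in the environ built below.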
    header_dict = dict(headers)
    path = header_dict.pop(u':path')
    try:
        path, query = path.split(u'?', 1)
    except ValueError:
        query = u""
    server_name = header_dict.pop(u':authority')
    try:
        server_name, port = server_name.split(u':', 1)
    except ValueError:
        port = "8443"

    environ = {
        u'REQUEST_METHOD': header_dict.pop(u':method'),
        u'SCRIPT_NAME': u'',
        u'PATH_INFO': path,
        u'QUERY_STRING': query,
        u'SERVER_NAME': server_name,
        u'SERVER_PORT': port,
        u'SERVER_PROTOCOL': u'HTTP/2',
        u'HTTPS': u"on",
        u'SSL_PROTOCOL': u'TLSv1.2',
        u'wsgi.version': (1, 0),
        u'wsgi.url_scheme': header_dict.pop(u':scheme'),
        u'wsgi.input': stream,
        u'wsgi.errors': sys.stderr,
        u'wsgi.multithread': True,
        u'wsgi.multiprocess': False,
        u'wsgi.run_once': False,
    }
    if u'content-type' in header_dict:
        environ[u'CONTENT_TYPE'] = header_dict[u'content-type']
    if u'content-length' in header_dict:
        environ[u'CONTENT_LENGTH'] = header_dict[u'content-length']
    for name, value in header_dict.items():
        # CGI-style variable names use underscores rather than hyphens.
        environ[u'HTTP_' + name.upper().replace(u'-', u'_')] = value
    return environ


# Set up the WSGI app.
application_string = sys.argv[1]
path, func = application_string.split(':', 1)
module = importlib.import_module(path)
APPLICATION = getattr(module, func)

# Set up TLS
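# The files cert.crt and cert.key are assumed to exist; for local testing a
# self-signed pair can be generated with, for example:
#   openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
#     -keyout cert.key -out cert.crt -subj "/CN=localhost"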
ssl_context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ssl_context.options |= (
    ssl.OP_NO_TLSv1 | ssl.OP_NO_TLSv1_1 | ssl.OP_NO_COMPRESSION
)
ssl_context.set_ciphers("ECDHE+AESGCM")
ssl_context.load_cert_chain(certfile="cert.crt", keyfile="cert.key")
ssl_context.set_alpn_protocols(["h2"])

# Do the asyncio bits
loop = asyncio.get_event_loop()
# Each client connection will create a new protocol instance
coro = loop.create_server(H2Protocol, '127.0.0.1', 8443, ssl=ssl_context)
server = loop.run_until_complete(coro)

# Serve requests until Ctrl+C is pressed
print('Serving on {}'.format(server.sockets[0].getsockname()))
try:
    loop.run_forever()
except KeyboardInterrupt:
    pass

# Close the server
server.close()
loop.run_until_complete(server.wait_closed())
loop.close()