Instead of waiting for the full LLM response and sending it as a single message, stream bot responses to Matrix as they are generated: flush each paragraph as soon as its double-newline boundary arrives, so users get incremental feedback while the model is still thinking. Story: 184_story_stream_bot_responses_to_matrix_on_double_newline_boundaries
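
The buffering logic can be sketched as follows — a minimal, library-agnostic generator that groups a stream of LLM text chunks into paragraphs on `\n\n` boundaries. The chunk source and the Matrix send call are assumptions (stand-ins for whatever streaming API and Matrix client the bot actually uses); only the splitting logic is the point.

```python
from typing import Iterable, Iterator


def stream_paragraphs(chunks: Iterable[str]) -> Iterator[str]:
    """Group streamed text chunks into paragraphs.

    Yields each paragraph as soon as its closing double newline
    arrives, then flushes any trailing text at end of stream.
    A boundary split across two chunks is still detected, because
    splitting runs on the accumulated buffer, not on raw chunks.
    """
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        # Emit every complete paragraph currently in the buffer.
        while "\n\n" in buffer:
            paragraph, buffer = buffer.split("\n\n", 1)
            if paragraph.strip():
                yield paragraph
    # End of stream: flush whatever partial paragraph remains.
    if buffer.strip():
        yield buffer.strip()


if __name__ == "__main__":
    # Simulated LLM chunk stream; note the boundary split across chunks.
    chunks = ["Hello", " world.\n\nSecond", " para.\n\nTh", "ird"]
    for paragraph in stream_paragraphs(chunks):
        # In the real bot this would be a Matrix room send, e.g.
        # room.send_text(paragraph) — hypothetical client call.
        print(paragraph)
```

Each yielded paragraph maps to one Matrix message, so the room transcript stays readable (one message per paragraph) instead of accumulating a flood of token-sized edits.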