In a recent post we walked through the steps required to use Tinybird to analyze Nginx logs. In this one, we'll share how we use our own product internally to analyze our Nginx traffic automatically and in real time, and how you can do it too.

The general architecture of what we'll build here

To follow along, you’ll need 3 things:

  • A Tinybird account. Sign up if you don’t have one yet.
  • The code from this GitHub repo: it contains a Tinybird data project with Data Source configurations and Pipes that will work out of the box.
  • tbtail, a tool to stream your logs to Tinybird, installed on the machines where your Nginx logs are generated.

With that, you can quickly get a sense of what’s going on with all your traffic in real time, along with examples to start slicing and dicing your logs however you want.

We’re still in private beta, so after you sign in with Google, GitHub or Microsoft authentication, your account won’t be active. Email us and we’ll activate it for you.

Run these commands to clone the data project locally, replicate it on Tinybird and have everything set and ready for tbtail to start sending logs to your account:
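The exact commands are in the repo's README; as a minimal sketch, assuming the Tinybird CLI (`tb`) and a placeholder repo URL (substitute the real one linked above), it could look like this:

```shell
# Placeholder repo URL: replace it with the actual GitHub repo from the post.
git clone https://github.com/example/nginx-log-analysis.git
cd nginx-log-analysis

# Authenticate the Tinybird CLI with your account's admin token
# (placeholder value), then push the Data Sources and Pipes to
# replicate the project in your account.
TB_TOKEN="<your-admin-token>"
tb auth --token "$TB_TOKEN"
tb push
```

Once the push finishes, the Data Sources are ready to receive whatever tbtail sends them.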

The last step is installing tbtail. The details on how to do it are in its repo: we’ve published deb and binary packages, and you can also compile it from source.

After it’s installed, you just need to run this command on the machine where the Nginx logs are created to start sending them to Tinybird automatically.

Then, if you go to your dashboard, you’ll see a new Pipe called query_grouped_requests. It defines an endpoint that lets you query your aggregated data. In the Pipe, you can click the green “View API” button in the top right corner of the page to see its live documentation.

For example, this is the live docs page of our endpoint. It’s created automatically when you publish an endpoint with Tinybird and it lets you get results in CSV as well as in JSON.
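You can also fetch the endpoint's output from the command line. This is a sketch assuming the pipe name from above and a placeholder read token:

```shell
# Placeholder: use a token from your workspace with read access to the pipe.
TOKEN="<your-read-token>"
PIPE="query_grouped_requests"

# Results as JSON...
curl -s "https://api.tinybird.co/v0/pipes/${PIPE}.json?token=${TOKEN}"

# ...or the same results as CSV
curl -s "https://api.tinybird.co/v0/pipes/${PIPE}.csv?token=${TOKEN}"
```

Swapping the extension in the URL is all it takes to switch output formats, which makes it easy to feed the same endpoint to dashboards and spreadsheets alike.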

With it, we created the charts below. They show traffic for a part of our infrastructure and how it evolves over time:

Requests count evolution, including status codes

Total data received (in the body of each request)

Statistics on data received (measured by the size of the body of each request)

Unique IPs per date

Requests per method