Performance of 2d Plotting

Hello all,

Does anyone have experience with performant real-time 2D plotting?

Current application:

I have a simple test application that I’ve been tinkering with to evaluate the performance of 2D plotting. It uses Qt Quick with a QQuickFramebufferObject (similar to https://gist.github.com/nocnokneo/c3fb01bb7ecaf437f7d6).

Each plot has its own framebuffer and vtkGenericOpenGLRenderWindow.

The plot just fills a table with sine-wave data from a QTimer. I have the plots rendering at 5 Hz, with the data arriving at 100 Hz (back-filled every 200 ms).

The performance is pretty poor, although this is a somewhat naive implementation (still new to vtk).

VTune indicates:

  • 10% of the time is spent in vtkContext2D::DrawPoly. Mostly in vector::operator[] when trying to build the VBO.
  • 6% of the time is vtkFreeTypeTools::GetBoundingBox for the sliding x-axis.
  • 5% of the time is in malloc: mostly for DrawPoly, some for FreeType

It seems each plot redraws all of its lines every frame instead of applying some sort of transform. Any hints towards more performant real-time plotting would be greatly appreciated!

  • Josh

What do you mean by poor performance? What do you find to be slow? How many rows do you have in your table?

In 3D Slicer, we use ctkVTKChartView to display VTK plots in real-time and rendering only starts to visibly lag (fall below 20-30fps) when you display tens of thousands of data points. But even then it may be because of other things that the application does and not due to VTK.

When drawing an XY chart consisting of 25000 points, 65% of the time is spent in vtkContext2D::DrawPoly. Nothing else really stands out in profiling.

The call to Render takes ~25ms per plot. Since the rendering is based on an update queue, the queue can start backing up with events since rendering all 15 plots takes ~400ms and they are trying to update at 200 ms intervals.
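
One way to keep the update queue from backing up is to decouple data arrival from rendering: data updates only mark a plot dirty, and an actual render is issued at most once per minimum interval. Below is a minimal sketch of that coalescing logic in plain C++; it is not tied to any VTK or Qt API, and the names are illustrative:

```cpp
#include <cstdint>

// Coalesces render requests: data updates only mark the plot dirty, and an
// actual render happens at most once per minimum interval, so queued updates
// can no longer pile up faster than the renderer can drain them.
struct RenderThrottle {
    std::int64_t minIntervalMs;        // e.g. 200 for a 5 Hz redraw cap
    std::int64_t lastRenderMs = -1;    // time of the last issued render
    bool dirty = false;                // new data since the last render?

    void markDirty() { dirty = true; }

    // Call from the event loop with the current time; returns true when a
    // render should actually be issued now.
    bool shouldRender(std::int64_t nowMs) {
        if (!dirty) return false;
        if (lastRenderMs >= 0 && nowMs - lastRenderMs < minIntervalMs) return false;
        lastRenderMs = nowMs;
        dirty = false;
        return true;
    }
};
```

With this in place, a burst of data events between frames collapses into a single render instead of queuing one render per event.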

This also prevents other things in the main event queue from being handled, such as rendering hover animations, mouse events, etc.

I have the table removing rows once its size exceeds 1000. Note that each plot gets its own table and render window.

Table updating code (this is just testing code):

void Plot::timerEvent(QTimerEvent* event)
{
    Q_UNUSED(event);

    static const quint32 sampleRateMs = 10;

    for (qint32 i = (DRAW_TIME_MS / sampleRateMs); i >= 0; i--) {
        vtkSmartPointer<vtkVariantArray> row = vtkSmartPointer<vtkVariantArray>::New();
        row->SetNumberOfValues(2);

        quint64 sampleTime = QDateTime::currentMSecsSinceEpoch() - (i * sampleRateMs) - _startTime;
        row->SetValue(0, sampleTime);
        row->SetValue(1, std::sin(static_cast<double>(sampleTime) / 1000));
        _table->InsertNextRow(row);

        if (_table->GetNumberOfRows() > 1000) {
            _table->RemoveRow(0);
        }
    }

    static bool first = true;
    if (first) {
        vtkPlot* line = _chart->AddPlot(vtkChart::LINE);
        line->SetInputData(_table, 0, 1);
        line->SetColor(0, 100, 200, 255);
        line->SetWidth(1.0);
    }

    double minX = _table->GetRow(0)->GetValue(0).ToDouble();

    _chart->GetAxis(1)->SetRange(minX, minX + 20000);
    _win->Render();
}

EDIT: Thanks for the link, I’ll check out the repo and see if I’m missing any low-hanging optimizations.

Did you mean to create a new line plot on every timerEvent call?

static bool first = true;
if (first) {
    vtkPlot* line = _chart->AddPlot(vtkChart::LINE);
    line->SetInputData(_table, 0, 1);
    line->SetColor(0, 100, 200, 255);
    line->SetWidth(1.0);
}

LOL, let me fix that first and see. Was experimenting :frowning:.

It may be a significant performance hit (and may lead to memory fragmentation) that you keep allocating/freeing memory on the heap hundreds of times per second. Instead of calling InsertNextRow/RemoveRow, keep the table size constant and just copy/update the values.
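
A minimal sketch of the fixed-size ring-buffer idea in plain C++ (independent of VTK; the struct and member names are illustrative):

```cpp
#include <cstddef>
#include <vector>

// Fixed-capacity ring buffer for (time, value) samples: writes overwrite the
// oldest entry once full, so no per-sample heap allocation occurs after setup.
struct SampleRing {
    explicit SampleRing(std::size_t capacity)
        : xs(capacity, 0.0), ys(capacity, 0.0) {}

    void push(double x, double y) {
        xs[head] = x;                  // overwrite the slot in place
        ys[head] = y;
        head = (head + 1) % xs.size();
        if (count < xs.size()) ++count;
    }

    // Oldest sample currently stored (useful for the sliding x-axis minimum).
    double oldestX() const {
        return count < xs.size() ? xs[0] : xs[head];
    }

    std::vector<double> xs, ys;        // pre-allocated once, reused forever
    std::size_t head = 0, count = 0;
};
```

The same pattern maps onto a vtkTable of fixed row count: keep a write index, overwrite rows in place, and never call InsertNextRow/RemoveRow in the hot path.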

Thanks for the suggestion, I wasn’t sure if the table had to be in sorted order. I will implement it as a circular buffer, then fix the extra AddPlot call that Mike mentioned and report back!

With the suggested changes, I’m down to ~18ms per plot. So 270ms to render all plots.

I’ve also experimented with turning the labels/ticks off. If I turn both the x-axis and y-axis labels off, the average time to plot is ~6ms. If I turn both x and y axis ticks off, I get down to ~2ms.

Ideally, I would like to get each plot render below 5ms per plot. Labels and ticks are probably necessary for my use case.

New test code:

Plot::Plot(vtkRenderWindow* win, QObject* parent)
    : QObject(parent)
    , _win(win)
    , _table(vtkSmartPointer<vtkTable>::New())
    , _xArr(vtkSmartPointer<vtkFloatArray>::New())
    , _yArr(vtkSmartPointer<vtkFloatArray>::New())
    , _view(vtkSmartPointer<vtkContextView>::New())
    , _chart(vtkSmartPointer<vtkChartXY>::New())
    , _index(0)
{
    _xArr->SetName("X Axis");
    _table->AddColumn(_xArr);

    _yArr->SetName("Y Axis");
    _table->AddColumn(_yArr);

    _table->SetNumberOfRows(1000);

    _view->SetRenderWindow(_win);
    _view->GetRenderer()->SetBackground(1.0, 1.0, 1.0);

    _view->GetScene()->AddItem(_chart);
    _chart->GetAxis(0)->SetBehavior(vtkAxis::FIXED);
    _chart->GetAxis(0)->SetRange(-1, 1);
    _chart->GetAxis(0)->SetNumberOfTicks(0);
    _chart->GetAxis(0)->SetLabelsVisible(false);

    _chart->GetAxis(1)->SetBehavior(vtkAxis::FIXED);
    _chart->GetAxis(1)->SetNotation(vtkAxis::STANDARD_NOTATION);
    _chart->GetAxis(1)->SetNumberOfTicks(0);
    _chart->GetAxis(1)->SetRange(0, 20000);
    _chart->GetAxis(1)->SetLabelsVisible(false);

    vtkPlot* line = _chart->AddPlot(vtkChart::LINE);
    line->SetInputData(_table, 0, 1);
    line->SetColor(0, 0, 255, 255);
    line->SetWidth(1.0);

    _startTime = QDateTime::currentMSecsSinceEpoch();
    startTimer(DRAW_TIME_MS);
}

void Plot::timerEvent(QTimerEvent* event)
{
    Q_UNUSED(event);

    double minX = std::max(_table->GetRow(_index)->GetValue(0).ToDouble(), 0.0);

    static const quint32 sampleRateMs = 10;

    for (qint32 i = (DRAW_TIME_MS / sampleRateMs); i >= 0; i--) {
        vtkSmartPointer<vtkVariantArray> row = vtkSmartPointer<vtkVariantArray>::New();
        row->SetNumberOfValues(2);

        quint64 sampleTime = QDateTime::currentMSecsSinceEpoch() - (i * sampleRateMs) - _startTime;
        row->SetValue(0, sampleTime);
        row->SetValue(1, std::sin(static_cast<double>(sampleTime) / 1000));
        _table->SetRow(_index, row);
        _index = (_index + 1) % 1000;
    }

    _table->Modified();
    _chart->GetAxis(1)->SetRange(minX, minX + 20000);
    _win->Render();
}

So to narrow the scope of this thread, I guess the question becomes:

How do I speed up performance of tick/label rendering?

The rendering time seems to increase linearly with the number of ticks. Roughly 1-2ms per tick.

Label rendering is known to be very slow. Do you have speed problems if you only show ticks and not labels?

Showing ticks only, 6 per axis, is ~4ms render time. So, no, ticks slow it down, but very minimally. It’s basically just labels.

Labels are required for my use case though, to build context about the data (not necessarily within some known bounds).

Is there any way to speed label rendering up?

If I use a monospace font, the calculations for text bounds could be much quicker.

Also, the labels could be transformed instead of reallocated every frame.
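
The monospace idea reduces text-bounds computation to simple arithmetic, since every glyph has the same advance width. A hedged sketch (charWidth and lineHeight stand in for metrics you would query once from the font; they are not VTK values):

```cpp
#include <string>
#include <utility>

// For a monospace font, the text bounding box is just character count times a
// fixed advance width -- no per-glyph FreeType measurement is needed.
// charWidth and lineHeight are placeholder metrics queried once from the font.
std::pair<int, int> monoBounds(const std::string& label,
                               int charWidth, int lineHeight) {
    return { static_cast<int>(label.size()) * charWidth, lineHeight };
}
```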

Yes, you could implement caching of the rendered labels, then they could be quickly redrawn at their new positions.
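
A sketch of what such a cache could look like in plain C++: the expensive rasterization runs only the first time a given label string appears, and the cached result is reused (and merely repositioned) afterwards. The `Image` type and render function are illustrative stand-ins, not VTK types:

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>

// Rendered-label cache keyed by label text. RenderFn is any callable that
// rasterizes a string into an Image; it is invoked only on cache misses.
template <typename Image, typename RenderFn>
class LabelCache {
public:
    explicit LabelCache(RenderFn render) : render_(render) {}

    // Returns the cached image for `text`, rasterizing it only on a miss.
    const Image& get(const std::string& text) {
        auto it = cache_.find(text);
        if (it == cache_.end()) {
            ++misses_;
            it = cache_.emplace(text, render_(text)).first;
        }
        return it->second;
    }

    std::size_t misses() const { return misses_; }

private:
    RenderFn render_;
    std::unordered_map<std::string, Image> cache_;
    std::size_t misses_ = 0;
};
```

For a sliding axis this pays off because most tick labels repeat from frame to frame; only the labels entering the visible range need fresh rasterization.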

In some actors, there is an option to use Qt for rendering labels (see vtkQtStringToImage), which might be faster or higher quality.