Video calls in KDE-Telepathy

Well, I think I owed you this one ;) Remember back in 2009 when I was working on KCall as part of the GSoC program? Well, it may have taken 2.5 years more, but I’m now pleased to announce that it’s finally in a ready-to-use state \o/ Don’t expect it to be perfect, of course. It still has a long way to go.

Here is the obligatory screenshot. Me on my desktop, calling myself on my laptop :)

[Screenshot: the KDE-Telepathy call-ui in action]

A little bit of history

When my GSoC finished in 2009, there were two main problems with KCall. The first was that the parts of the telepathy specification for doing calls (i.e. the “StreamedMedia” channel type) were problematic, and the API of the telepathy-farsight library, which was the only way to use StreamedMedia, was also weird; it took me too many tries to finally understand it (in late 2010…). In simple words, this means that KCall was very unstable because it used the API in the wrong way (if there really was a right way to use it…). The second problem was that there was no telepathy integration in the KDE desktop, so KCall would have needed its own contact list, account manager and other things that it shouldn’t have to implement.

In late 2010, the KDE-Telepathy project started evolving and we finally managed to make a first release last summer with the necessary components to use telepathy on the KDE desktop. At about the same time, work began on a new API for doing calls in telepathy, the so-called “Call” channel type, plus telepathy-farstream, the new and enhanced version of telepathy-farsight. It took a little longer than expected, but a few weeks ago, thanks to the awesome work of my colleagues at Collabora who engineered the whole thing, the “Call” API and telepathy-farstream were finally finished and released. Fortunately, last year I had already worked on porting the call-ui to the draft Call API, using the draft telepathy-qt Call bindings that used to be in the telepathy-qt4-yell module. So now I only had to update the telepathy-qt bindings to the latest and greatest API specification, do the same with the call-ui, and clean up the UI a bit, which was way too ugly. And so I did.

The present and the future

The UI is far from perfect at the moment, but the engine seems to work reliably. I have many additions and improvements in mind. However, since I suck at UI design, I’d love to have mockups of ideas from people who can actually design UIs. And I’d also love to have other people implement those ideas, since I’m a lazy man… :P (ok, I don’t really mean that). So, if you feel like helping (either way), this is your chance to get involved ;)

The current UI will be included in the next KDE-Telepathy release, 0.4, which is scheduled for next month. Be prepared.

Try it

So, if you can’t wait for the next KDE-Telepathy release and want to try this now, what you need is the latest ktp-call-ui from git master with all of its dependencies. To make a call, simply right-click one of your contacts in the contact list and click “audio call” or “video call”. Alternatively, you can do this directly from the text-ui or the contact plasmoid. Note that older versions of those components also have audio/video call buttons, but they will try to start StreamedMedia calls instead, which will fail. Also note that calls require XMPP (Jabber, Google Talk) at the moment, but SIP support is also on its way upstream.

Painting video on Qt surfaces with qtvideosink

During the past month I’ve been working on a new GStreamer element called qtvideosink. The purpose of this element is to allow painting video frames from GStreamer on any kind of Qt surface, on any platform supported by Qt. A “Qt surface” can be a QWidget, a QGraphicsItem in a QGraphicsView, a QDeclarativeItem in a QDeclarativeView, and even off-screen surfaces like QImage, QPixmap, QGLPixelBuffer, etc… The initial reason for working on this new element was to support GStreamer video in QML, which is something that many people have asked me about in the past. Until now only QtMultimedia supported this, with some code in Phonon being in progress as well. But of course, the main disadvantage with both QtMultimedia and Phonon is that although they support this feature with GStreamer as the backend, they don’t allow you to mix pure GStreamer code with their QML video item, so they are useless if you need to do something more advanced using the GStreamer API directly. Hence the need for something new.

My idea with qtvideosink was to implement something that would be a standalone GStreamer element, which would not require the developer to use a specific high-level API in order to paint video on QML. In the past I have also written another similar element, qwidgetvideosink, which is basically the same idea, but for QWidgets. After looking at the problem a bit more carefully, I realized that qwidgetvideosink and qtvideosink would in fact share a lot of their internal logic, and therefore I could probably make one element generic enough to paint both on QWidgets and on QML, and perhaps more surfaces. And so I did.

I started by taking the code of qtgst-qmlsink, a project that was started by a colleague here at Collabora last year with basically the same intention, but which was never finished properly. This project was initially based on QtMultimedia’s GStreamer backend. As a first step, I did some major refactoring to remove its QtMultimedia dependencies and to make it an independent GStreamer plugin (it used to be a library). Then I merged it with qwidgetvideosink, so that they can share the common parts of the code, and also wrote a unit test for it. Sadly, the unit test proved something that I had already suspected: the original QtMultimedia code was quite buggy. But I must say I enjoyed fixing it. It was a good opportunity for me to learn a lot of things about video formats and OpenGL.

How does it work?

First of all, you can create the sink with the standard gst_element_factory_make method (or its equivalent in the various bindings). You will notice that this sink provides two signals: an action signal (a slot in Qt terminology) called “paint” and a normal signal called “update”. “update” is emitted every time the sink needs the surface to be repainted. This is meant to be connected directly to QWidget::update(), QGraphicsItem::update() or something similar. The “paint” slot takes a QPainter pointer and a rectangle (x, y, width, height as qreals) as its arguments and paints the video inside the given rectangle using the given painter. This is meant to be called from the widget’s paint event or the graphics item’s paint() function. So, all you need to do is take care of those two signals and qtvideosink will do everything else.
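
Roughly, the wiring on the QWidget side could look like this (a minimal sketch; how the “paint” arguments are marshalled through GObject here is an assumption on my part, so treat it as an illustration rather than copy-paste material):

```cpp
// Sketch: connecting qtvideosink's "update"/"paint" signals to a QWidget.
#include <QtGui/QWidget>
#include <QtGui/QPainter>
#include <gst/gst.h>

class VideoSurface : public QWidget
{
public:
    explicit VideoSurface(QWidget *parent = 0)
        : QWidget(parent)
    {
        m_sink = gst_element_factory_make("qtvideosink", NULL);
        // Repaint the widget whenever the sink has a new frame to show.
        g_signal_connect(m_sink, "update", G_CALLBACK(onUpdate), this);
    }

    // Link this element into your pipeline as usual.
    GstElement *videoSink() const { return m_sink; }

protected:
    void paintEvent(QPaintEvent *)
    {
        QPainter painter(this);
        // Ask the sink to paint the current frame into our full rectangle.
        g_signal_emit_by_name(m_sink, "paint", (void *) &painter,
                              (qreal) 0, (qreal) 0,
                              (qreal) width(), (qreal) height());
    }

private:
    static void onUpdate(GstElement *, gpointer self)
    {
        // "update" may arrive from a GStreamer streaming thread,
        // so schedule the repaint in the GUI thread.
        QMetaObject::invokeMethod(static_cast<VideoSurface *>(self),
                                  "update", Qt::QueuedConnection);
    }

    GstElement *m_sink;
};
```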

Getting OpenGL into the game

You may be wondering how this sink does the actual painting. Using QPainter, using OpenGL, or maybe something else? Well, there are actually two variants of this video sink. The first one, qtvideosink, just uses QPainter. It can handle only RGB data (only a subset of the formats that QImage supports) and does format conversion and scaling in software. The second one, however, qtglvideosink, uses OpenGL/OpenGL ES with shaders. It can handle both RGB and YUV formats and does format conversion and scaling in hardware. It is used in exactly the same way as qtvideosink, but it requires a QGLContext pointer to be set on its “glcontext” property before its state is set to READY. This of course means that the underlying surface must support OpenGL (i.e. it must be one of QGLWidget, QGLPixelBuffer or QGLFrameBufferObject). To get this working on QGraphicsView/QML, you just need to set a QGLWidget as the viewport of the QGraphicsView and use this widget’s QGLContext in the sink.
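
For the QGraphicsView/QML case, that setup could look roughly like this (a sketch using the “glcontext” property name from above; the fallback at the end is explained in the next paragraph):

```cpp
// Sketch: handing qtglvideosink the GL context of a QGraphicsView viewport,
// falling back to the QPainter-based qtvideosink if shaders are unavailable.
#include <QtGui/QGraphicsView>
#include <QtOpenGL/QGLWidget>
#include <gst/gst.h>

GstElement *createSinkFor(QGraphicsView *view)
{
    QGLWidget *glWidget = new QGLWidget;
    view->setViewport(glWidget);

    GstElement *sink = gst_element_factory_make("qtglvideosink", NULL);
    if (sink) {
        // The context must be set before the sink goes to READY.
        g_object_set(sink, "glcontext",
                     (gpointer) glWidget->context(), NULL);
        if (gst_element_set_state(sink, GST_STATE_READY)
                == GST_STATE_CHANGE_FAILURE) {
            // No usable shaders on this system.
            gst_object_unref(sink);
            sink = NULL;
        }
    }
    if (!sink) {
        sink = gst_element_factory_make("qtvideosink", NULL);
    }
    return sink;
}
```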

qtglvideosink uses either GLSL shaders or, if GLSL is not supported, ARB fragment program shaders. This means it should work on pretty much every GPU/driver combination that exists for Linux, on both desktop and embedded systems. In case no shaders are supported at all, it will fail to change its state to READY, and then you can just substitute it with qtvideosink, which is guaranteed to work on all platforms supported by Qt.

qtglvideosink also has an extra feature: it supports the GstColorBalance interface. Color adjustment is done in the shaders together with the format conversion. qtvideosink doesn’t support this, as it wouldn’t make much sense there: color adjustment would have to be done in software, and that is better achieved by plugging a videobalance element before the sink. No need to duplicate code.
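
Using that interface is just the standard GStreamer 0.10 color balance API; something along these lines (a sketch; I’m assuming the usual channel labels here, so check what qtglvideosink actually exposes):

```cpp
// Sketch: adjusting a color balance channel on qtglvideosink through the
// GstColorBalance interface. The "CONTRAST" label is an assumption.
#include <gst/gst.h>
#include <gst/interfaces/colorbalance.h>

void setContrast(GstElement *glSink, gdouble percent /* 0.0 .. 1.0 */)
{
    GstColorBalance *balance = GST_COLOR_BALANCE(glSink);
    const GList *channels = gst_color_balance_list_channels(balance);

    for (const GList *l = channels; l != NULL; l = l->next) {
        GstColorBalanceChannel *channel = GST_COLOR_BALANCE_CHANNEL(l->data);
        if (g_str_equal(channel->label, "CONTRAST")) {
            gint value = channel->min_value + (gint)
                    (percent * (channel->max_value - channel->min_value));
            gst_color_balance_set_value(balance, channel, value);
        }
    }
}
```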

So, which variant to use?

If you are interested in painting video on QGraphicsView/QML, then qtglvideosink is the best choice of all sinks. And if for any reason the system doesn’t support OpenGL shaders, qtvideosink is the next choice. Now, if you intend to paint video on normal QWidgets, it is best to use one of the standard GStreamer sinks for your platform, unless you have a reason not to. QWidgets can be turned into native system windows by calling their winId() method, and therefore any sink that implements the GstXOverlay interface can be embedded in them. On X11, for example, xvimagesink is the best choice. However, if you need to do something more tricky and embedding another window doesn’t suit you very well, you could use qtglvideosink in a QGLWidget (preferably) or qtvideosink / qwidgetvideosink on a standard QWidget.
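
For reference, that “standard sink” route is just the usual GstXOverlay dance (GStreamer 0.10 API; gst_x_overlay_set_window_handle is the 0.10.31+ name, older versions used gst_x_overlay_set_xwindow_id):

```cpp
// Sketch: embedding xvimagesink (or any GstXOverlay-capable sink) into a
// QWidget on X11 by handing it the widget's native window id.
#include <QtGui/QWidget>
#include <gst/gst.h>
#include <gst/interfaces/xoverlay.h>

void embedInto(QWidget *widget, GstElement *overlaySink)
{
    // Make sure the widget is backed by a real X11 window.
    widget->setAttribute(Qt::WA_NativeWindow);
    gst_x_overlay_set_window_handle(GST_X_OVERLAY(overlaySink),
                                    widget->winId());
}
```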

Note that qwidgetvideosink is basically the same thing as qtvideosink, with the difference that it takes a QWidget pointer in its “widget” property and handles everything internally for painting on this widget. It has no signals. Other than that, it still does painting in software with QPainter, just like qtvideosink. This is just there to keep compatibility with code that may already be using it, as it already exists in QtGStreamer 0.10.1.
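
In other words, its usage reduces to something like this (a sketch, assuming the “widget” property is a plain pointer property as described above):

```cpp
// Sketch: pointing qwidgetvideosink at the QWidget it should paint on.
#include <QtGui/QWidget>
#include <gst/gst.h>

GstElement *createWidgetSink(QWidget *target)
{
    GstElement *sink = gst_element_factory_make("qwidgetvideosink", NULL);
    // The sink hooks into the widget's painting internally; no signals needed.
    g_object_set(sink, "widget", (gpointer) target, NULL);
    return sink;
}
```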

This is actually 0.10 stuff… What about GStreamer 0.11/1.0?

Well, if you are interested in 0.11, you will be happy to hear that there is already a partial 0.11 port around. Two weeks ago I was at the GStreamer 1.0 hackfest in Málaga, Spain, and one of the things I did there was porting qtvideosink to 0.11. I must say the port was quite easy to do. However, last week I added some more stuff to the 0.10 version that I haven’t ported to 0.11 yet. I’ll get to that soon; it shouldn’t take long.

Try it out

The code lives in the qt-gstreamer repository. The actual video sinks are independent from the qt-gstreamer bindings, but qt-gstreamer itself has some helper classes for using them. Firstly, there is QGst::Ui::VideoWidget, a QWidget subclass which will accept qtvideosink, qtglvideosink and qwidgetvideosink just like any other video sink and will transparently do all the required work to paint the video in it. Secondly, there are QGst::Ui::GraphicsVideoWidget and QGst::Ui::GraphicsVideoSurface. Those two are meant to be used together to paint video on a QGraphicsView or QML. You can find more about them in the documentation in graphicsvideosurface.h (this will soon be on the documentation website). Finally, there is a QtGStreamer QML plugin, which exports a “VideoItem” element if you “import QtGStreamer 0.10”. This is also documented in the GraphicsVideoSurface header. All of this will soon be released in the upcoming qt-gstreamer 0.10.2.
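
As a rough sketch of the QGraphicsView/QML route (class and plugin names as listed above; “video.qml” and the exact QML-side property names are placeholders of mine, graphicsvideosurface.h has the authoritative example):

```cpp
// Sketch: exposing a QGst::Ui::GraphicsVideoSurface to QML so that a
// "VideoItem { surface: videoSurface }" element can display the video.
#include <QtGui/QApplication>
#include <QtOpenGL/QGLWidget>
#include <QtDeclarative/QDeclarativeView>
#include <QtDeclarative/QDeclarativeContext>
#include <QtCore/QUrl>
#include <QGst/Init>
#include <QGst/Ui/GraphicsVideoSurface>

int main(int argc, char **argv)
{
    QApplication app(argc, argv);
    QGst::init(&argc, &argv);

    QDeclarativeView view;
    view.setViewport(new QGLWidget); // lets the surface pick qtglvideosink

    // The surface wraps the actual video sink; link surface->videoSink()
    // into your GStreamer pipeline.
    QGst::Ui::GraphicsVideoSurface *surface =
            new QGst::Ui::GraphicsVideoSurface(&view);
    view.rootContext()->setContextProperty("videoSurface", surface);

    view.setSource(QUrl::fromLocalFile("video.qml")); // your QML scene
    view.show();
    return app.exec();
}
```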

QtGStreamer 0.10.1

This weekend I released QtGStreamer 0.10.1, the first stable version of QtGStreamer. This release marks the beginning of the stable 0.10 series of QtGStreamer that will continue for the lifetime of GStreamer 0.10. For those of you that don’t yet know what QtGStreamer is, it is a set of libraries that provide Qt-style C++ bindings for GStreamer, plus extra helper classes and elements for better integration of GStreamer in Qt applications.
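
To give a rough idea of the flavour (a tiny sketch of mine, not one of the official examples):

```cpp
// Sketch: the Qt-style API in a nutshell -- refcounting smart pointers,
// exceptions and camelCase wrappers around the familiar GStreamer calls.
#include <QtCore/QCoreApplication>
#include <QGlib/Error>
#include <QGst/Init>
#include <QGst/Parse>
#include <QGst/Pipeline>

int main(int argc, char **argv)
{
    QCoreApplication app(argc, argv);
    QGst::init(&argc, &argv);

    QGst::PipelinePtr pipeline;
    try {
        // Equivalent of gst_parse_launch(), returning a refcounted pointer.
        pipeline = QGst::Parse::launch("videotestsrc ! autovideosink")
                       .dynamicCast<QGst::Pipeline>();
    } catch (const QGlib::Error &error) {
        qWarning("Failed to build pipeline: %s", error.what());
        return 1;
    }

    pipeline->setState(QGst::StatePlaying);
    return app.exec();
}
```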

I must say thanks a lot to Mauricio, the co-developer of QtGStreamer, who helped me a lot with the design and code; to the GStreamer community, who accepted this project under the GStreamer umbrella with great enthusiasm; to Nokia for sponsoring it; to Collabora for assigning me and Mauricio to work on it; and to all those developers who are already using it in their projects and have helped us by providing feedback.

The future

Development of course does not stop here. It has just started. We will try to improve the bindings as much as we can: by exporting more and more of GStreamer’s functionality, by adding more convenience methods, classes and/or GStreamer elements that ease the use of GStreamer in Qt applications, and by collecting opinions and ideas from all of you out there who will use this API. This last bit is quite important IMHO, so if you have any suggestions about things that you don’t like or things that you would like to see implemented, please file a bug to let us know.

Use in KDE

I am quite happy to see that this library already has early adopters in KDE. Apart, of course, from my telepathy-kde-call-ui (ex-KCall), which is the “father” of QtGStreamer, it is also used in Kamoso, a Cheese-like camera app, whose authors, Alex Fiestas and Aleix Pol, have been very patient waiting for me to release QtGStreamer before releasing Kamoso and have also been very supportive during all this time (thanks!).

Personal thoughts

I must say this project was fun to develop. During development, I learned a lot about C++ that I didn’t know before and I also learned how GObject works, which I must say is quite interesting, although ugly for my taste. Learning more about C++ was my main source of interest from the beginning of the project, and for some time I couldn’t even imagine that this project would ever get this far, but I kept coding it for myself. Obviously, I am more than happy now that it has finally evolved into something that is also useful for others and has wide acceptance :)

Telepathy KDE Sprint

This weekend I participated in the Telepathy-KDE sprint at Collabora‘s offices in Cambridge. We gathered here to settle things down, make some design decisions, make future plans and start hacking on them. Overall, I think this was quite successful. We now all have a clear plan of what to do and what to aim for in the first release.

Things we did include:

  • We all together discussed the release roadmap. The first release is expected to be around the time KDE SC 4.6 is released, but not as part of KDE SC, since that would require us to merge stuff before the hard feature freeze, which is too close and we don’t think we can make it.
  • We all together discussed the components that we have, what problems each one has, what needs to be done, what the blocker issues are, etc., and assigned jobs to everyone.
  • Olli gave talks about telepathy-qt4 and the suggested git workflow that we are going to follow as soon as we migrate to git, which will happen as soon as the KDE admins allow us.
  • Dario, Olli and Andre together hacked on and reviewed the telepathy-qt4 cmake branch, which seems to be in quite a good shape now for upstream inclusion.
  • Olli, Daniele and I discussed code from our projects that could be upstreamed into telepathy-qt4, and Olli did in fact collect some nice ideas from us.
  • George and Sebastian looked at some Nepomuk stuff that needed fixing and fixed it (it’s just Nepomuk stuff to me, I have no idea what’s going on at that level yet :P).
  • Andre, Sebastian, Will, Dario and David did some UI mockups on the whiteboard that looked pretty cool. Andre later even did a quick QML mockup of the contact list.
  • Olli, George and I, with some input from Simon McVittie, today discussed how the approver (the thing that pops up asking you whether you want to accept or reject a chat, a call or something similar) should look and behave. We will probably have to make some additions to the telepathy specification for this one, but it’s good that we finally came to a sensible conclusion on a problem that has actually been troubling some of us for a long time.
  • Dominik and David hacked on the chat window, doing some cool things like adding support for loading Adium themes.
  • Lots of other cool stuff, including eating a lot of pizza and burgers, drinking a lot of beer, socializing with the Collabora people, etc… :P

I think that’s all I had to say for now, stay tuned for more news about Telepathy-KDE :)

PS: We also have a group photo that can be found here, taken by Sjoerd Simons with Daniele’s camera :)
