<h2>Understanding the QWidget layout flow</h2>
Robert Knight's Blog &middot; 2013-11-21<br />
When layouts in a UI are not behaving as expected, or performance is poor, it can be helpful to have a mental model of the layout process in order to know where to start debugging. For web browsers there are some <a href="http://taligarsiel.com/Projects/howbrowserswork1.htm#Layout">good resources</a> which provide a description of the process at different levels. The <a href="http://qt-project.org/doc/qt-4.8/layout.html">layout documentation</a> for Qt describes the various layout facilities that are available, but I haven't found a detailed description of the flow itself, so this is my attempt to explain what happens when a layout is triggered, from the initial change through to the widgets being resized and repositioned appropriately.<br />
<br />
<ol>
<li>A widget's contents are modified in some way that requires a layout update. Such changes can include:</li>
<ul>
<li>Changes to the content of the widget (eg. the text in a label, content margins being altered)</li>
<li>Changes to the sizePolicy() of the widget</li>
<li>Changes to the layout() of the widget, such as new child widgets being added or removed</li>
</ul>
<li>The widget calls QWidget::updateGeometry() which then performs several steps to trigger a layout:</li>
<ol>
<li>It invalidates any cached size information for the QWidgetItem associated with the widget in the parent layout.</li>
<li>It recursively climbs up the widget tree (first to the parent widget, then the grandparent and so on), invalidating that widget's layout. The process stops when we reach a widget that is a top level window or doesn't have its own layout - we'll call this widget the top-level widget, though it might not actually be a window.</li>
<li>If the top-level widget is not yet visible, then the process stops and layout is deferred until the widget is due to be shown.</li>
<li>If the top-level widget is shown, a LayoutRequest event is posted <i>asynchronously</i> to the top-level widget, so a layout will be performed on the next pass through the event loop.</li>
<li>If multiple layout requests are posted to the same top-level widget during a pass through the event loop, they will get compressed into a single layout request. This is similar to the way that multiple QWidget::update() requests are compressed into a single paint event.</li>
</ol>
<li>The top-level widget receives the LayoutRequest event on the next pass through the event loop. This can then be handled in one of two ways:</li>
<ol>
<li>If the widget has a layout, the layout will intercept the LayoutRequest event using an event filter and handle it by calling QLayout::activate().</li>
<li>If the widget does not have a layout, it may handle the LayoutRequest event itself and manually set the geometry of its children.</li>
</ol>
<li>When the layout is activated, it first sets the fixed, minimum and/or maximum size constraints of the widget depending on QLayout::sizeConstraint(), using the values calculated by QLayout::minimumSize(), maximumSize() and sizeHint(). These functions will recursively proceed down the layout tree to determine the constraints for each item and produce a final size constraint for the whole layout. This may or may not alter the current size of the widget.</li>
<li>The layout is then asked to resize its contents to fit the current size of the widget, using QLayout::setGeometry(widget->size()). The specific implementation of the layout - whether it is a box layout, grid layout or something else - then lays out its child items to fit this new size.</li>
<li>For each item in the layout, the QLayout::setGeometry() implementation will typically ask the item for various size parameters (minimum size, maximum size, size hint, height for width) and then decide upon a final size and position for the item. It will then invoke QLayoutItem::setGeometry() to update the position and size of the widget.</li>
<li>If the layout item is itself a layout or a widget, steps 5-6 proceed recursively down the tree, updating all of the items whose constraints have been modified.</li>
</ol>
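As a minimal sketch of how step 1 feeds into this flow, here is a hypothetical widget (Qt 4-era API assumed) whose size hint depends on its content; changing the content calls updateGeometry(), which kicks off steps 2-7 above:

```cpp
#include <QtGui>

// Hypothetical example: a label-like widget whose sizeHint() depends on its
// text. Changing the text invalidates the hint, so we call updateGeometry()
// to trigger the layout flow described above.
class TagLabel : public QWidget
{
public:
    explicit TagLabel(QWidget* parent = 0) : QWidget(parent) {}

    void setText(const QString& text)
    {
        m_text = text;
        updateGeometry(); // invalidates cached layout data, posts a LayoutRequest
        update();         // separately schedules a repaint
    }

    virtual QSize sizeHint() const
    {
        // the padding values are arbitrary, for illustration only
        return fontMetrics().size(Qt::TextSingleLine, m_text) + QSize(8, 4);
    }

private:
    QString m_text;
};
```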
<div>
A layout update is an expensive operation, so there are a number of steps taken to avoid unnecessary re-layouts:</div>
<div>
<ul>
<li>Multiple layout update requests submitted in a single pass through the event loop are coalesced into a single update</li>
<li>Layout updates for widgets that are not visible and layouts that are not enabled are deferred until the widget is shown or the layout is re-enabled</li>
<li>The QLayoutItem::setGeometry() implementations will typically check whether the current and new geometry differ or whether they have been invalidated in some way before performing an update. This prunes parts of the widget tree from the layout process which have not been altered.</li>
<li>The QWidgetItem associated with a widget in a layout caches information which is expensive to calculate, such as sizeHint(). This cached data is then returned until the widget invalidates it using QWidget::updateGeometry().</li>
</ul>
<div>
<br /></div>
</div>
<div>
Given this flow, there are a few things to bear in mind to avoid unexpected behaviour:</div>
<div>
<ul>
<li>Qt provides multiple ways to set constraints such as fixed and minimum sizes.</li>
<ul>
<li>Using QWidget::setFixedSize(), setMinimumSize() or setMaximumSize(). This is simple and available whether you control the widget or not.</li>
<li>Implementing the sizeHint() and minimumSizeHint() functions and using QWidget::setSizePolicy() to determine how these hints are handled by the layouts. If you control the widget, it is almost always preferable to use sizePolicy() together with the layout hints.</li>
</ul>
<li>The layout management documentation suggests that handling LayoutRequest events in QWidget::event() is an alternative to implementing a custom layout. A potential problem with this is that LayoutRequest events are delivered asynchronously, on the next pass through the event loop. If your widget is likely to update its own geometry in response to the LayoutRequest event, this can trigger layout flicker: several passes through the event loop occur before the layout process is fully finished, and each intermediate stage may be painted briefly on screen, since the event loop can process a paint event on each pass as well as the layout update. So if you need a custom layout, subclassing QLayout/QLayoutItem is the recommended approach unless you're sure that your widget will always be used as a top-level widget.</li>
</ul>
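Both constraint approaches in rough outline (the widget and variable names here are hypothetical):

```cpp
// 1) External constraints - usable on any widget, whether you control it or not:
thirdPartyWidget->setMinimumSize(200, 100);
thirdPartyWidget->setMaximumHeight(300);

// 2) Internal hints plus a size policy - preferable when you control the widget:
class Thumbnail : public QWidget
{
public:
    explicit Thumbnail(QWidget* parent = 0) : QWidget(parent)
    {
        // may grow horizontally; vertical size comes straight from sizeHint()
        setSizePolicy(QSizePolicy::Expanding, QSizePolicy::Fixed);
    }
    virtual QSize sizeHint() const { return QSize(160, 120); }
    virtual QSize minimumSizeHint() const { return QSize(80, 60); }
};
```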
<div>
<br /></div>
</div>
<h2>Improving build times of large Qt apps</h2>
2013-11-04<br />
My colleagues and I recently spent time improving the build times of a largish Qt app (<a href="http://www.mendeley.com/">Mendeley</a>) and its associated test suite. I'm sharing some notes here in case anyone else finds them useful. Most of the steps fall under one of a few basic ideas:<br />
<ul>
<li>Measure first</li>
<li>Do more in parallel</li>
<li>Work around the inefficiencies of C++ compilation</li>
<li>Use faster tools</li>
<li>Do less disk I/O</li>
</ul>
All of these steps can improve build times on all platforms, but those that reduced the amount of I/O during builds were especially effective on Windows.<br />
<br />
<h4>
Measure first</h4>
<div>
<br /></div>
When we started out, I expected that running the tests would be consuming most of our CI system's cycle time. It turned out that the largest bottleneck was actually just building the code on Windows, which was taking 3x as long as on Linux (30 mins for a fresh build vs 10 on Linux). The unit tests did take longer to run on Windows, but only by a factor of 2 (20 mins total vs 10 on Linux).<br />
<br />
<h4>
Use those cores!</h4>
<div>
<br /></div>
One of the simplest things to address is making full use of the multiple cores on your system. The '-j' argument to <i>make</i> sets the number of parallel jobs. The <a href="http://stackoverflow.com/questions/2499070/gnu-make-should-j-equal-number-the-number-of-cpu-cores-in-a-system">optimal number</a> will depend on a number of factors. Setting the value to the number of cores is a reasonable starting point, but check what happens with different values.<br />
<br />
When running unit tests, use your test driver's option for running multiple tests in parallel; ctest supports a '-j' argument for this as well. An important thing to check before enabling this is that your tests are set up so that they can't interfere with one another. This means not trying to use the same resources (files, settings keys, I/O ports, web service accounts etc.) at the same time. Some tests might be easier to isolate than others, in which case you can split your test suite into subsets and only run some of the subsets in parallel. ctest has a facility for assigning labels to tests:<br />
<br />
set_tests_properties( $TEST_TARGET PROPERTIES LABELS $LABELS )<br />
<br />
CTest then has a set of command-line arguments that can be used to run only tests with labels matching a certain pattern, or exclude tests with labels matching a certain pattern. This can then be used to run only a subset of tests which are known not to interfere with one another concurrently.<br />
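For instance (the target names and labels here are hypothetical), a CMakeLists.txt might tag the tests that are safe to run concurrently:

```cmake
# Tag tests which use no shared resources and can run in parallel
set_tests_properties(parser_tests string_tests PROPERTIES LABELS "isolated")

# Tests sharing a database must not run alongside each other
set_tests_properties(database_tests PROPERTIES LABELS "shared-db")
```

Running 'ctest -j 8 -L isolated' then executes only the 'isolated' subset in parallel, while 'ctest -LE isolated' runs the remainder serially.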
<br />
<h4>
Working around C++ compilation inefficiency</h4>
<div>
<br /></div>
When the compiler encounters an #include statement, it effectively copies and pastes the included content into the current source file. The resulting output that the compiler has to lex, parse and analyze ends up being tens of thousands of lines long in the case of a typical source file in a Qt app. The more you use code-heavy headers such as the C++ standard library or Boost, the worse this gets. This is incredibly inefficient and means that much of your build time can be spent re-parsing the same source code over and over, which is compounded by the complexity of parsing C++ in the first place.<br />
<br />
Consider <a href="https://gist.github.com/robertknight/6962650">this very simple list view app</a>. There are only 15 lines of actual code in the example, but the preprocessed output, which can be produced by passing the <i>-E</i> flag to gcc, is just under 43,000 lines of code (as determined by <a href="http://www.dwheeler.com/sloccount/">sloccount</a>), or just under 60,000 lines when C++11 mode is enabled (using the '<i>-std=c++0x</i>' flag).<br />
<br />
In a language with a proper package/module system (eg. C#, Go or many other languages), processing an import only involves reading some metadata from the already-compiled module rather than re-parsing everything. <a href="http://clang.llvm.org/docs/Modules.html">A proper module system for C++</a> is in the works but is still some way off. In the meantime, there are <strike>hacks</strike> workarounds available which can help considerably.<br />
<br />
<h3>
Precompiled headers</h3>
<div>
<br /></div>
<a href="http://msdn.microsoft.com/en-us/library/szfdksca.aspx">MSVC</a>, <a href="http://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html">GCC</a> and <a href="http://clang.llvm.org/docs/UsersManual.html#precompiled-headers">Clang</a> all have good support for precompiled headers. The use of precompiled headers is even more important now since the preprocessed output of many of the #includes from the C++ standard library <a href="https://plus.google.com/104092308458481627495/posts/hxbRxEWHwMa">grows considerably in size</a> when C++11 is enabled. Note that under MSVC on Windows, C++11 mode is always enabled.<br />
<br />
With the small example above, creating a precompiled header which includes just the <i>QStringList</i> header reduces compile times for the main .cpp file on my system from ~1.1s to ~0.7s (about 35%). This sounds modest but adds up by the time you have a project with hundreds of source files. Even in a small project with just a few dozen source files I think it is worthwhile.<br />
<br />
The steps to enable precompiled headers will depend on the build system you are using. With qmake, this is <a href="http://stackoverflow.com/questions/13710243/how-to-use-precompiled-headers-in-qt-project">relatively simple</a>. CMake lacks a simple built-in command for this but there are <a href="https://gist.github.com/larsch/573926">samples online</a> that we used as a basis.<br />
<br />
A downside of precompiled headers is that you are effectively automatically #including an extra header with every file that you build, so a file may compile in a build with precompiled headers but fail to build in one without them if it is missing necessary #includes that the precompiled header supplies. If you're running a CI system, it is therefore useful to have at least one regular build that does not use precompiled headers.<br />
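With qmake, for example, the whole setup is only a couple of lines (the file names here are hypothetical); stable.h is then just a header which #includes the rarely-changing headers (QStringList, QVariant and so on) used throughout the project:

```
# myapp.pro - enable a precompiled header
PRECOMPILED_HEADER = stable.h
CONFIG += precompile_header
```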
<br />
<h3>
Unity builds</h3>
<div>
<br /></div>
A <a href="http://buffered.io/posts/the-magic-of-unity-builds">unity build</a> involves creating a single source file which #include's all the source files from a particular module or the whole project and compiling that at once. The main caveat with this approach is that variables and functions declared within an implementation .cpp file may now clash with declarations from other source files - since they are now being compiled together as a single source file instead of separately.<br />
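The idea, and the name-clash caveat, can be demonstrated with a few lines of shell (toy file names, nothing Qt-specific):

```shell
# Two translation units that each define a file-local helper with the same name.
cat > part_a.cpp <<'EOF'
static int helper() { return 1; }   // file-local, so no clash in a normal build
int func_a() { return helper(); }
EOF
cat > part_b.cpp <<'EOF'
static int helper() { return 2; }
int func_b() { return helper(); }
EOF
cat > main.cpp <<'EOF'
#include <cstdio>
int func_a(); int func_b();
int main() { std::printf("%d\n", func_a() + func_b()); }
EOF

# Normal build: three separate compiles, one link.
g++ part_a.cpp part_b.cpp main.cpp -o normal_build && ./normal_build

# Unity build: one file that #includes all the sources...
printf '#include "part_a.cpp"\n#include "part_b.cpp"\n#include "main.cpp"\n' > unity.cpp

# ...which fails, because helper() is now defined twice in one translation unit.
g++ unity.cpp -o unity_build 2>/dev/null || echo "unity build failed: redefinition"
```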
<br />
<h4>
More efficient build tools</h4>
<div>
<br /></div>
<div>
Part of the reason for a gradual creep in build times as a project grows is scaling issues with the build tools. The amount of time taken for a do-nothing build (ie. running 'make' when everything is up to date) grows noticeably with cmake + make as the total number of targets increases. Fortunately for us, engineers on Google Chrome ran into this problem harder and long before we did, and they have produced some helpful replacements for the standard tools:</div>
<ul>
<li>The <a href="http://martine.github.io/ninja/">Ninja build system</a> is designed to be fast, especially for incremental builds where little has changed. Recent versions of CMake have built-in support for Ninja (use '<i>cmake -G Ninja</i>' to generate Ninja build files). The speed-up for incremental builds is decent on Mac and Linux, and very noticeable on Windows compared to nmake. Prior to Ninja, Qt developers also created <a href="http://blog.qt.digia.com/blog/2009/03/27/speeding-up-visual-c-qt-builds/">jom</a> as a faster alternative to nmake.</li>
<li>On Linux, the <a href="http://en.wikipedia.org/wiki/Gold_%28linker%29">Gold linker</a> is faster than the traditional ld linker and can often be used as a drop-in replacement.</li>
</ul>
<br />
<h4>
Reducing total disk I/O</h4>
<div>
<br /></div>
Disk I/O is very slow, so reducing the total amount of I/O (especially random I/O) required during a build can improve overall build times substantially. Anecdotally this is especially true on Windows, where reducing the total amount of I/O performed during a clean build had the largest impact in closing the gap between build + test times on Windows and those on Linux and Mac.<br />
<h3>
Use faster hardware</h3>
<div>
<br /></div>
It always feels a little dirty to solve software inefficiency by throwing faster hardware at the problem but if you can afford it, it can be a quick win.<br />
<ul>
<li>Adding more memory will reduce the likelihood of the build system swapping.</li>
<li>A good SSD drive will speed up disk I/O, especially for operations which do a lot of random I/O.</li>
<li>If you have a lot of memory spare you can create a RAMDisk and do the build on that.</li>
</ul>
I haven't compared the impact of an SSD vs. a standard IDE drive myself; this advice comes mostly from the <a href="http://www.chromium.org/developers/how-tos/build-instructions-windows">Chromium developers' build notes</a>.<br />
<h3>
Reducing debug info size</h3>
<div>
<br /></div>
In debug builds, a large proportion of the total size of data read/written from disk is typically debug information. When doing local development, this information is usually useful. When generating builds on a continuous integration system that will purely be used for automated tests, this is less so.<br />
<ul>
<li>All compilers (MSVC, gcc, clang) have switches to control the amount of debug info that is generated. With <a href="http://gcc.gnu.org/onlinedocs/gcc/Debugging-Options.html">gcc</a>/<a href="http://clang.llvm.org/docs/UsersManual.html#controlling-size-of-debug-information">clang</a> these are controlled by the -gXYZ switches.</li>
</ul>
<ul>
<li>Recent versions of clang have a -gline-tables-only option which considerably <a href="http://clang.llvm.org/docs/UsersManual.html#controlling-size-of-debug-information">reduces the amount of debug info</a> that is generated.</li>
</ul>
<br />
<h3>
Generating fewer binaries for tests</h3>
<div>
<br /></div>
For every binary that is generated as part of a project, there are a number of overheads:<br />
<ul>
<li>Each binary will add a number of additional targets to the build system</li>
<li>Each binary requires a linking step - which can be memory and I/O intensive.</li>
<li>Each binary generated requires reading/writing additional data to disk. The cost of this depends on how large the generated binary is and how many files need to be processed to assemble the final binary.</li>
</ul>
In our case, we are using the <a href="http://qt-project.org/doc/qt-5.1/qttestlib/qtest.html">QTestLib</a> framework for unit tests, which by default encourages the creation of one test class per original class. Each test class is then compiled into a separate binary with a QTEST_MAIN($TEST_CLASS_NAME) macro providing the entry point for the test app. This works fine for smaller apps. When a project grows larger, however, and you have hundreds of test classes, the overhead of linking all of those binaries can add noticeably to the total build time.<br />
<br />
We changed the test builds to produce one test binary per source directory instead of one per test class. This was done by replacing the QTEST_MAIN() macro with a substitute which instead declares a '$TESTCLASS_main()' function and registers it in a global map from test class name to entry function on startup. All of the test classes are then compiled and linked together with a small stub library which declares the 'int main()' function; this reads the name of the test to run from the command-line and calls the corresponding '$TESTCLASS_main()' function, forwarding the other command-line arguments to it. This allows multiple Qt test cases to be linked into a single binary, which improves build times in several ways:<br />
<ul>
<li>The number of linking operations during builds was considerably reduced.</li>
<li>The total amount of binary data generated on disk was reduced as code that was previously statically linked into the test binary for each test class is now only linked into a single test binary for each group of tests.</li>
<li>The total number of make steps and targets for the whole project was reduced.</li>
</ul>
On Windows this change shaved 30% off our total build time and the impact on build times of adding a new test case is now greatly reduced.<br />
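The registration scheme can be sketched in plain C++. This is an illustrative stand-in, not our actual code - the real macro body would run QTest::qExec() for the named test class rather than a plain function:

```cpp
#include <cstdio>
#include <map>
#include <string>

// Each test class registers a '<TestClass>_main()' entry point in a global
// map during static initialization; one shared stub dispatches by name.
typedef int (*TestMainFn)(int, char**);

static std::map<std::string, TestMainFn>& testRegistry()
{
    static std::map<std::string, TestMainFn> registry;
    return registry;
}

struct TestRegistrar
{
    TestRegistrar(const char* name, TestMainFn fn) { testRegistry()[name] = fn; }
};

// Stand-in for the QTEST_MAIN() replacement described above; the real macro
// would invoke QTest::qExec(new TestClass, argc, argv) inside this function.
#define DECLARE_TEST_MAIN(TestClass)                                          \
    static int TestClass##_main(int, char**);                                 \
    static TestRegistrar TestClass##_registrar(#TestClass, TestClass##_main); \
    static int TestClass##_main(int, char**)

DECLARE_TEST_MAIN(TestParser)   { std::puts("TestParser ran");   return 0; }
DECLARE_TEST_MAIN(TestDatabase) { std::puts("TestDatabase ran"); return 0; }

// The stub main() would call this: select a test by name from argv[1] and
// forward the remaining command-line arguments to its entry point.
int runNamedTest(int argc, char** argv)
{
    if (argc < 2) return 1;
    std::map<std::string, TestMainFn>::iterator it = testRegistry().find(argv[1]);
    if (it == testRegistry().end()) return 1;
    return it->second(argc - 1, argv + 1);
}
```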
<br />
<h3>
Generating smaller binaries</h3>
<div>
<br /></div>
Another way to reduce the size of compiled binaries is to build each module of the app into a shared rather than a static library. This is sometimes referred to as a '<a href="http://www.chromium.org/developers/how-tos/component-build">component build</a>'. When there are many executables being generated from the same source code, this reduces the amount of work for the linker and the amount of I/O, by generating the shared code and associated debug info only once when building the shared library/DLL, instead of linking it separately into each binary.<br />
<br />
Note that by doing this you are deferring some of the linking work from build time to runtime and consequently startup will slow down as the number of dynamically loaded libraries increases.<br />
<br />
<h4>
Further reading</h4>
<div>
I hope these notes are useful - please let me know if you have other recommendations in the comments. In the meantime, here are a few notes from existing projects which I found to be useful background reading:</div>
<br />
<ul>
<li>Notes on accelerating Chromium builds on <a href="http://www.chromium.org/developers/how-tos/build-instructions-windows#TOC-Accelerating-the-build">Windows</a>, <a href="http://code.google.com/p/chromium/wiki/LinuxFasterBuilds">Linux</a> and <a href="http://code.google.com/p/chromium/wiki/MacBuildInstructions">Mac</a> - this doesn't involve Qt but the advice is still quite relevant.</li>
<li>Notes on improving <a href="http://gregoryszorc.com/blog/">Firefox's build system</a>.</li>
<li>An <a href="http://stackoverflow.com/questions/2976630/why-does-go-compile-so-quickly">explanation</a> of how a language designed with build performance in mind differs from C++</li>
</ul>
<br />
<br />
<br />
<h2>qt-signal-tools 0.2</h2>
2013-06-25<br />
A new version of the <a href="https://github.com/robertknight/qt-signal-tools">qt-signal-tools library</a> for connecting signals to arbitrary functions is available.<br />
<br />
Changes in this release:<br />
<br />
<ul>
<li>Compatibility with earlier versions of Qt 4. The previous release required Qt 4.8. The current version works with Qt 4.6 and up and possibly older releases as well.</li>
<li>Compatibility with Qt 5. Though the functionality of QtSignalForwarder can be mostly achieved in Qt 5 using the new signal/slot syntax, this may be useful for creating code which can work with either version or for porting.</li>
<li>Performance improvements.</li>
<li><span style="font-family: Courier New, Courier, monospace;">QtSignalForwarder::connectWithSender()</span> utility: this provides a convenient way to connect a signal to a slot which receives the sender as the first argument, eg. <span style="font-family: Courier New, Courier, monospace;">connectWithSender(button, SIGNAL(clicked()), form, SLOT(buttonClicked(QPushButton*)))</span></li>
</ul>
<br />
<br />
The performance improvements come from changing the way that the hidden proxy object which forwards the signal determines where the signal came from. The previous implementation worked in the same way as <span style="font-family: Courier New, Courier, monospace;">QSignalMapper</span>, by using <span style="font-family: Courier New, Courier, monospace;">QObject::sender()</span> and <span style="font-family: Courier New, Courier, monospace;">QObject::senderSignalIndex()</span> to determine which signal the proxy was handling. These two functions have some overhead though: not only do they lock a mutex, but they also slow down linearly as the number of senders connected to a given receiver increases. The previous version of QtSignalForwarder therefore created a new proxy object for each sender.<br />
<br />
So I looked for an alternative way to identify the caller of the slot. When a signal -> slot connection is established, Qt internally maps the arguments to the SIGNAL() and SLOT() macros to integer method IDs. The details of the connection, including the receiver, connection type and method IDs of the signal/slot, are then stored in a connection object and added to a list maintained by the sender. When a signal is emitted, Qt invokes the <span style="font-family: Courier New, Courier, monospace;">qt_metacall()</span> function provided by the receiver's <span style="font-family: Courier New, Courier, monospace;">QMetaObject</span> and passes in the kind of action to perform (property read, property write, method call), a method/property ID and a list of arguments. This function then forwards the arguments to the actual signal/slot method corresponding to the method ID.<br />
<br />
The method IDs are normally assigned by moc when it processes a header file and generates the QMetaObject object that is used for all of Qt's introspection features. However, it is possible to specify the method IDs directly when creating a connection by using <span style="font-family: Courier New, Courier, monospace;">QMetaObject::connect(sender, signalMethodId, receiver, receiverMethodId...).</span> I'm now abusing the receiver method ID by assigning a new ID to each connection. A caveat is that internally method IDs are stored as 16-bit unsigned integers to save space, since a single QObject-based class would normally have tens of methods at most. This means there is an upper limit of ~65K unique tags that can be used to identify the connection being invoked.<br />
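Stripped of the Qt specifics, the trick is roughly the following (a plain C++ analogy, not the library's actual code):

```cpp
#include <cassert>
#include <map>

// One proxy serves many connections: each connection gets a fresh integer ID
// (standing in for the receiver method ID passed to QMetaObject::connect()),
// and the single metacall-style entry point recovers the right binding from
// the ID alone - no mutex-guarded sender() lookup is needed.
class ForwarderProxy
{
public:
    ForwarderProxy() : m_nextId(0) {}

    int bind(void (*callback)(void*), void* context)
    {
        // Qt stores method IDs as 16-bit unsigned ints, hence the ~65K limit.
        assert(m_nextId <= 0xFFFF);
        int id = m_nextId++;
        m_bindings[id] = Binding(callback, context);
        return id;
    }

    // Analogous to qt_metacall(): the ID alone identifies the connection.
    void metacall(int id)
    {
        Binding& b = m_bindings[id];
        b.callback(b.context);
    }

private:
    struct Binding
    {
        Binding() : callback(0), context(0) {}
        Binding(void (*cb)(void*), void* ctx) : callback(cb), context(ctx) {}
        void (*callback)(void*);
        void* context;
    };
    int m_nextId;
    std::map<int, Binding> m_bindings;
};
```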
<br />
After removing the use of <span style="font-family: Courier New, Courier, monospace;">sender()</span> and <span style="font-family: Courier New, Courier, monospace;">senderSignalIndex()</span> in <span style="font-family: Courier New, Courier, monospace;">QtSignalForwarder</span>, the same proxy object can be re-used for a larger number of senders/receivers. A caveat is that we now have to be more careful about how this is used in the context of multiple threads. For now I've kept things simple by restricting the use of <span style="font-family: Courier New, Courier, monospace;">QtSignalForwarder::connect()</span> to objects which live on the main application thread, which is not a problem for many practical purposes. When this does need to be used with an object that lives on a background thread, a new <span style="font-family: Courier New, Courier, monospace;">QtSignalForwarder</span> instance can be created and the <span style="font-family: Courier New, Courier, monospace;">bind()</span> function used directly.<br />
<h2>qt-signal-tools - Pre-packaged slot calls and connecting signals to arbitrary functions in Qt 4</h2>
2013-02-16<br />
A useful new feature in Qt 5 is the ability to connect signals to arbitrary functions instead of just Qt signals/slots/properties, including C++11 lambdas. As <a href="https://qt-project.org/wiki/New_Signal_Slot_Syntax">this page on the Qt Project wiki</a> explains, this is especially useful when writing code to perform async operations, where you often want to pass additional context to the slot.<br />
<br />
I've written a <a href="https://github.com/robertknight/qt-signal-tools">small library for Qt 4</a> which provides similar functionality. The library includes:
<ul>
<li><b>QtCallback</b> - A pre-canned QObject method call. QtCallback stores an object, a slot to call and optionally a list of pre-bound arguments to pass to the slot. This is useful if you need to pass additional context to the slot, other than the values provided by the signal.</li>
<li><b>QtSignalForwarder::connect()</b> - Connects signals from QObjects to arbitrary functions, methods or QtCallbacks. You can use this together with <i>bind()</i> and <i>function&lt;&gt;</i> to pass additional arguments to the method other than those provided by the signal, or to re-arrange arguments. You can think of this as a more flexible alternative to the <i>QSignalMapper</i> class that Qt 4 provides. There are also a couple of utility features:
<ul><li><b>QtSignalForwarder::delayedCall()</b> - A more flexible alternative to <i>QTimer::singleShot()</i> which can be used to invoke an arbitrary function after a set delay.</li>
<li><b>Event connections</b> - Invoke an arbitrary function or QtCallback when an object receives a particular type of event. This is useful when the object does not have a built-in signal that is emitted in response to that event, and requires less boilerplate than using <i>QObject::installEventFilter()</i>.</li>
</ul>
</li>
<li><b>safe_bind()</b> - A downside of connecting a signal to a function object is that the signal does not automatically disconnect if the receiver is destroyed. safe_bind() creates a wrapper around a (QObject*, function) pair which when called, invokes the function if the object still exists or does nothing and returns a default value if the object has been destroyed. You can use this together with <i>QtSignalForwarder</i> to connect a signal to an arbitrary method on a QObject which effectively 'disconnects' when the receiver is destroyed.</li></ul>
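A rough taste of the API, reconstructed from the names above - treat the exact signatures as assumptions and defer to the README:

```cpp
// Hypothetical usage - exact signatures may differ; see the project README.
// Pre-bind an argument to a slot call, then hand it to a signal connection:
QtCallback callback(statusBar, SLOT(showMessage(QString)));
callback.bind(QString("Saved"));
QtSignalForwarder::connect(saveButton, SIGNAL(clicked()), callback);

// Run an arbitrary callback after a delay, QTimer::singleShot()-style:
QtSignalForwarder::delayedCall(1000, callback);
```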
For example usage, please see <a href="https://github.com/robertknight/qt-signal-tools/blob/master/README.md">the README</a>, <a href="https://github.com/robertknight/qt-signal-tools/tree/master/examples">the examples</a> and <a href="https://github.com/robertknight/qt-signal-tools/tree/master/tests">the tests</a>.
The code is available from <a href="https://github.com/robertknight/qt-signal-tools">github.com/robertknight/qt-signal-tools</a>.
The requirements are:
<ul>
<li>Qt 4.8</li>
<li>A compiler with the TR1 standard library extensions (most C++ compilers from the past few years - including MSVC >= 2008 and GCC 4.x. I have tested with MSVC 2010 and recent GCC/Clang versions) or one which supports equivalent features from the C++11 standard library.</li>
</ul>
Compared to the implementation in Qt 5, there are a few disadvantages:
<ul>
<li>Argument type checking happens at runtime when <i>QtSignalForwarder::connect()</i> is called, similar to standard QObject signal-slot connections. <i>QObject::connect()</i> in Qt 5 can do type checking at compile time.</li>
<li>In order to do the runtime type checking, the types of arguments passed from the signal to the function or method must be registered using <i>Q_DECLARE_METATYPE()</i> or <i>qRegisterMetaType()</i></li>
<li>Using <i>QtSignalForwarder</i> does have additional overhead since a hidden proxy object is created to route the signal and arguments to the target function. I investigated using a single proxy object for all forwarded signals or a pool of proxies. Unfortunately it turns out that the <i>QObject::sender()</i> and <i>QObject::senderSignalIndex()</i> functions which are used internally have a cost that is linear in the number of connections.</li>
</ul>
Please let me know if you find this useful. If there is any other related functionality which you'd like to see, please let me know in the comments.<br />
<h2>qt-mustache Templating Library</h2>
2012-08-27<br />
I had a need for a templating library for use with several Qt projects. I was looking for something simple that is easy to drop into a project and has a familiar syntax. Existing libraries that I found included <a href="http://www.gitorious.org/grantlee/pages/Home">Grantlee</a> (a featureful library using Django template syntax), <a href="https://bitbucket.org/aztechmedia/qustache/src">Qustache</a> and <a href="http://code.google.com/p/qctemplate/">QCTemplate</a> (a thin wrapper around Google's <a href="http://code.google.com/p/ctemplate/">CTemplate</a> library for logic-free templates, which inspired Mustache). None of these were quite what I was looking for, so I wrote a small library which uses the popular <a href="http://mustache.github.com/">Mustache</a> template syntax.<br /><br />
<b>Example Usage:</b><blockquote class="tr_bq">
<pre>
#include "mustache.h"

#include &lt;QTextStream&gt;
#include &lt;QVariantHash&gt;

int main()
{
    QVariantHash contact;
    contact["name"] = "John Smith";
    contact["email"] = "john.smith@gmail.com";

    QString contactTemplate = "<b>{{name}}</b> <a href=\"mailto:{{email}}\">{{email}}</a>";

    Mustache::Renderer renderer;
    Mustache::QtVariantContext context(contact);

    QTextStream output(stdout);
    output << renderer.render(contactTemplate, &context);
    return 0;
}
</pre>
Outputs:
<pre>
<b>John Smith</b> <a href="mailto:john.smith@gmail.com">john.smith@gmail.com</a>
</pre>
</blockquote>
The main feature, like Mustache itself, is that it doesn't have that many features. The lack of logic constructs in templates prevents application logic from ending up in the templates themselves. Other 'features' are:<br />
<br />
<ul>
<li>Lightweight. Two source files. The only dependency is QtCore.</li>
<li>Efficient.</li>
<li>Complete 'mustache' syntax support (values, sections, inverted sections, partials, lambdas, escaping). I may look at incorporating one or two facilities from <a href="http://handlebarsjs.com/">Handlebars</a> in future.</li>
<li>The standard data source is a QVariantMap or QVariantHash. There is an interface if you wish to provide your own - eg. if you wanted to use a QAbstractItemModel as the data structure to fill in a template.</li>
<li>Partial templates can be specified as an in-memory map or as &lt;name&gt;.mustache files in a directory. You can also provide your own loader if you want to be able to fetch partial templates from a different source.</li>
</ul><br />
The code is available from github (BSD license): <b><a href="https://github.com/robertknight/qt-mustache">https://github.com/robertknight/qt-mustache</a></b><br />
<br />
Anonymoushttp://www.blogger.com/profile/18355855797731147055noreply@blogger.com3tag:blogger.com,1999:blog-2996035476649681224.post-46491049778040603402011-07-21T15:56:00.000-07:002011-07-22T02:28:18.136-07:00Qt InspectorWhilst debugging a widget layout problem a few days ago, I was looking around for a tool to view the structure of a Qt application without having to recompile it, or in other words, Firebug / Web Inspector for Qt widgets. I found the <a href="http://blogs.kde.org/node/799">KSpy</a> tool in the KDE repositories which is in need of some love and there are a variety of tools to aid in runtime debugging and modification of QML but not much in the way of tools for QWidget-based interfaces. Please let me know in the comments if I missed any.<br />
<br />
I have put together a simple tool called <a href="https://github.com/robertknight/Qt-Inspector">Qt Inspector</a>.<br />
<br />
Qt Inspector starts a specified application or connects to an existing Qt application and once connected can:<br />
<ul><li>Browse the object tree of Qt applications.</li>
<li>View properties of objects.</li>
<li>Edit properties of objects.</li>
<li>Locate a widget in the object tree by clicking on it in the application.</li>
<li>Copy a reference to an object for use in a debugger (eg. to manipulate it by calling methods on it, examine member fields, or set up conditional breakpoints).</li>
</ul>Here is a screenshot of Qt Inspector connected to Dolphin showing the widget tree for the settings dialog. Like the Web Inspector or Firebug, this can be used to tweak styling settings, layouts and other properties without a recompile.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-gZW9n7V-zgs/TiinbhFZNpI/AAAAAAAAABc/F3zpGbS73bE/s1600/inspector-dolphin-settings.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="312" src="http://2.bp.blogspot.com/-gZW9n7V-zgs/TiinbhFZNpI/AAAAAAAAABc/F3zpGbS73bE/s640/inspector-dolphin-settings.png" width="640" /></a></div><br />
<br />
<b>Usage:</b><br />
<br />
Qt Inspector can either attach to an existing application or launch<br />
a specified application and then attach to it.<br />
<br />
From a terminal, this can be done with:<br />
<br />
<blockquote>qtinspector [process ID]<br />
qtinspector [program name] [args]</blockquote><br />
<b>Design:</b><br />
<br />
Qt Inspector operates by injecting a helper library into the target process using gdb. This helper library sets up a local socket and listens for requests from the inspector process. The inspector and target process communicate via <a href="http://code.google.com/apis/protocolbuffers/">protocol buffer</a> messages over this socket.<br />
<br />
The inspector uses Qt's meta-object system to fetch the properties of an object and read/write their values, so properties need to be <a href="http://www.developer.nokia.com/Community/Wiki/How_to_use_Q_PROPERTY">declared with Q_PROPERTY</a> for them to be visible to the inspector.<br />
<br />
<b>Source:</b><br />
<br />
The <a href="https://github.com/robertknight/Qt-Inspector">code is up on GitHub</a>. Please download it and give it a whirl. Happy forking :)<br />
<br />
<b>Update:</b> Eva Brucherseifer let me know about the <a href="http://gitorious.org/basyskom-inspector">Basyskom Inspector</a> tool on Gitorious. In addition to being able to select and inspect widgets it can also view signals and slots, application resources and take screenshots.<br />
<args h3=""> </args>Anonymoushttp://www.blogger.com/profile/18355855797731147055noreply@blogger.com13tag:blogger.com,1999:blog-2996035476649681224.post-72518369303602414982010-04-12T02:32:00.000-07:002010-04-12T06:11:24.037-07:00We're hiring Qt developersI'm currently working for <a href="http://www.mendeley.com">Mendeley</a>, a startup based in London. We're building software for organising, reading, annotating and collaborating on research papers (mostly in PDF format) which integrates with an online network for researchers. We're currently looking for developers to join the team working on our Qt-based desktop application for Windows, Mac and Linux.<br /><br /><h4>Essential skills are:</h4><ul><br /><li>Knowledge of C++ and experience debugging, testing and profiling C++ applications.</li><br /><li>Experience with Qt. If you're the kind of person who likes delving into the internals of Qt that's even better.</li><br /><li>Solid computer science basics.</li><br /></ul><h4>Knowledge of any of the following would be particularly useful:</h4><ul><br /><li>Model/view frameworks (especially Qt's implementation). An interest in or experience with some of the upcoming Qt technologies (eg. Qt Quick) would also be a plus.</li><br /><li>Databases (in particular, SQLite)</li><br /><li>Search/indexing frameworks (eg. Lucene)</li><br /><li>Scripting languages (eg. Python, Ruby)</li><br /><li>Version control (SVN, git).</li><br /><li>Automated testing tools (eg. 
QtTest).</li><br /><li>Knowledge of platform-specific APIs such as Cocoa on Mac*.</li><br /></ul><b>Involvement with open source projects is a big plus.</b> <a href="http://en.wikipedia.org/wiki/Eating_one%27s_own_dog_food">Dog fooding</a> your own software is always helpful, so if you have a background in research or even just like reading papers to find out how things work, that would also be useful.<br /><h4>If you're interested, please <a href="mailto:robert.knight@mendeley.com">get in touch</a>.</h4><br />* Though Qt abstracts away most platform details, there are times when using native APIs is necessary.Anonymoushttp://www.blogger.com/profile/18355855797731147055noreply@blogger.com4tag:blogger.com,1999:blog-2996035476649681224.post-31249093686421796192009-05-08T09:52:00.001-07:002009-05-08T10:09:50.701-07:00Konsole under JauntyAs noted in several places, some applications feel somewhat sluggish in Ubuntu Jaunty compared with Intrepid if your system has an Intel graphics card.<br /><br />There isn't a great solution for X in general yet - some options involving X.org tweaks and replacement X packages are discussed <a href="http://davyd.livejournal.com/275982.html">here</a>.<br /><br />Applications which render a lot of text seem to be affected quite a bit. For Qt applications there is a simple workaround; in Konsole it makes tab-switching much more snappy. Start Konsole with the raster graphics mode:<br /><br />konsole -graphicssystem raster<br /><br />This also works wonders on the <a href="http://www.mendeley.com">Mendeley Desktop</a> research management software which I work on.Anonymoushttp://www.blogger.com/profile/18355855797731147055noreply@blogger.com8tag:blogger.com,1999:blog-2996035476649681224.post-11035786907148891182008-08-17T15:37:00.000-07:002008-08-17T15:41:18.795-07:00Konsole scrolling weirdnessSome users are experiencing weird visual glitches when scrolling in Konsole. 
The problem isn't trivial to debug but until it gets fixed there are a couple of workarounds:<ul><li>Un-hide the menu bar if it is hidden</li><li>Set the QT_USE_NATIVE_WINDOWS environment variable to '1' before starting Konsole (export QT_USE_NATIVE_WINDOWS=1)</li></ul>Anonymoushttp://www.blogger.com/profile/18355855797731147055noreply@blogger.com3tag:blogger.com,1999:blog-2996035476649681224.post-44586477461743564632008-06-07T06:29:00.001-07:002008-06-07T06:37:24.933-07:00Slow on NVidia?If you have an NVidia graphics card and Konsole in trunk seems very slow in a composited desktop (eg. KWin 'desktop effects' or compiz are enabled) then start Konsole with the --notransparency option. Intel/ATI are not affected.<br /><br />I do not know the cause of the problem, I'll post an update when I find out.Anonymoushttp://www.blogger.com/profile/18355855797731147055noreply@blogger.com3tag:blogger.com,1999:blog-2996035476649681224.post-47468705038177469302008-05-09T17:54:00.000-07:002008-05-09T19:54:23.162-07:00Singing in tune<a href="http://vizzzion.org/?blogentry=815">Sebas</a>, <a href="http://www.omat.nl/drupal/re-ramblings-6-month-cycles-and-plasma">Tom</a> and <a href="http://aseigo.blogspot.com/2008/05/re-re-ramblings-on-6-month-cycles-and.html">Aaron</a> discuss a regular 6 month release cycle. This was first brought up in Mark Shuttleworth's <a href="http://home.kde.org/~akademy07/videos/1-06-Keynote-Shuttleworth.ogg">keynote at Akademy 2007</a>. The relevant section starts around 30:00 and in the questions at 42:00. There was more to his argument than just "release every 6 months". The exact time delay was a detail, albeit an important one. For the benefit of those who weren't there and perhaps also those who were, I'll summarize his points:<ul><li>Regular, predictable releases which are synced with appropriate other projects provide a sense of rhythm and structure. 
It allows projects up/down and across-stream to plan better and co-operate more efficiently.</li><li>If distributors believe they can trust KDE's release schedule and release quality then they will allow a smaller safety margin between the time KDE makes a release and the time when distributors need to ship their next release. Consequently, new releases will get to users faster and hence feedback from recent developments will get back to developers faster. Gnome already benefits from this trust.</li><li>Getting the release out is the most important feature.</li><li>It would be wrong for KDE to specifically pick a distribution to sync with. Instead pick a date which 'conveniently' matches that of other software at the same level in the stack. This synchronization may be explicit or it may be "coincidental" (if arranging and publicly announcing such co-operation is unpalatable for whatever reason)</li><li>Regular time-based releases are much easier if features can be landed when they are complete, so that the primary work going on in trunk is integration, as opposed to dividing up the 6 months into slots of X months feature development, Y months bug fixing, Z weeks releasing. The kernel developers have proved that this approach can work.</li><li>The regular cycle may have to be suspended for big backwards-compatibility breaking upgrades (KDE 4)</li><li>The value of synchronization is sufficiently high that it may justify re-arranging the structure of a big project to accommodate it.</li><li>The time delay is subject to debate. Ubuntu found that 6 months works well for them because, for example, it divides evenly into a year so holidays etc. can be planned around it. The appropriate delay depends on where a project is in the stack and what its up/down and across-stream are doing. 
Further upstream projects can generally get away with a shorter delay because of the buffer provided by downstream.</li></ul>From what I recall, there was a general consensus amongst attendees in favor of the idea - which left the details to sort out. My personal experience with large projects is limited but I think the above arguments are good, particularly the key first point, and the evidence from projects which have tried to follow this approach is positive on the whole.Anonymoushttp://www.blogger.com/profile/18355855797731147055noreply@blogger.com5tag:blogger.com,1999:blog-2996035476649681224.post-43624448416853585372008-04-23T03:45:00.000-07:002008-04-23T04:24:33.129-07:00Magic TrickMagic trick for Kubuntu users:<br /><br />1. Make an empty new directory ~/.compose-cache<br />2. Start a KDE application which has a text input widget (anything with a line edit or editable combo box will do)<br />3. Check ~/.compose-cache, it should now have a file in it whose name is a long string of numbers<br /><br />All being well, your Qt/KDE/Gtk applications should now start up 50-150ms faster.<br /><br />Users of other distros are welcome to give it a try; I have only been able to test directly on Kubuntu. If you have a non-empty /var/cache/libx11/compose folder (eg. SuSE users) then this optimization is already enabled so you don't need to do anything.<br /><br />For those curious about what is going on here, this enables an optimization which Lubos (of general KDE speediness fame) came up with some time ago and was then rewritten and integrated into libx11. Ordinarily on startup applications read input method information from /usr/share/X11/locale/&lt;your locale&gt;/Compose. This Compose file is quite long (>5000 lines for the en_US.UTF-8 one) and takes some time to process. 
libX11 can create a cache of the parsed information which is much quicker to read subsequently, but it will only re-use an existing cache in /var/cache/libx11/compose or create a new one in ~/.compose-cache if the directory already exists. <br /><br /><a href="https://bugs.freedesktop.org/show_bug.cgi?id=3104">Relevant freedesktop.org bug report</a>Anonymoushttp://www.blogger.com/profile/18355855797731147055noreply@blogger.com15tag:blogger.com,1999:blog-2996035476649681224.post-75060390396048168942008-04-19T06:43:00.000-07:002008-04-19T07:23:08.077-07:00Konsole - KDE 4.1 ChangesI had a few emails recently asking for a summary of changes in the terminal and in particular 'Send Input to All' which was missing from KDE 4.0. So here are the changes in 4.1, in addition to the many bug fixes and tweaks.<ul><br /><li><b>Copy Input To</b> dialog allows input to one session to be copied to all or a subset of other sessions. (Like 'Send Input to All' in KDE 3 but more flexible)</li><br /><li>Drag and drop re-arrangement of tabs and movement of tabs between windows.</li><br /><li>Better warnings and fallbacks if starting the shell fails (due to missing binary or crash).</li><br /><li>Transparency is available by default (with an option to forcibly disable it)</li><br /><li>Support for bi-directional text rendering. (Diego Iastrubni)</li><br /><li>New 'Dark Pastels' colour scheme (adapted from one by Christoffer Sawicki)</li><br /><li>Mouse-wheel scrolling in less and other non-mouse enabled terminal applications</li> <br /></ul>Nothing ground breaking here but it should make KDE 4.1 a nice step forwards from KDE 3.5 for those who have stayed away from KDE 4.0. <br /><br />In other news, like several other KDE developers I have started using git and git-svn locally. It is a huge improvement over SVN, especially when developing experimental features that touch many parts of the code alongside bug fixes to the current trunk. It does make you wonder how you ever managed before. 
A quick "git branch" on my current local checkout shows 10 branches for various little features in progress, for example:<br /><br /> custom-pty-fd<br /> image-background<br /> inheritance-ui<br />* master<br /> port-to-mono<br /> profile-editor-binding<br /> profile-editor-improvements<br /> window-tab-settings<br /><br />Interestingly though and perhaps paradoxically given the open nature of the project, one of the most useful benefits is the ability to create branches to work on features without telling the whole world. There is much emphasis on the benefits of incremental development but at the same time I think it is important to be able to do some things in private so that they can arrive on the scene with a bang that gets attention. Compiz or git being good examples.Anonymoushttp://www.blogger.com/profile/18355855797731147055noreply@blogger.com27tag:blogger.com,1999:blog-2996035476649681224.post-38670046928256623402008-03-24T09:13:00.000-07:002008-03-26T09:30:40.644-07:00Users and User ResearchI'd like to second Celeste's recent posts about user-research. Having watched KDE development fairly closely for a couple of years now I agree that it is why, despite many man-hours of development, some projects just aren't as useful as they could be.<br /><br />A classic example for me is the story of my first patch to KDE. KDE's education suite includes a tool called KMPlot which takes a mathematical expression and plots it as a graph. At secondary school my teachers used a similar program called <a href="http://www.spasoft.co.uk/omnigraph.html">OmniGraph</a> heavily. Aside from looking and feeling somewhat dated it had a raft of minor irritations which could easily be fixed with access to the code. <br /><br />I wanted something similar to help with assignments at home and I found KMPlot (KDE 3.4x). I fired it up and entered a simple equation:<br /><br />y = sin(x)<br /><br />I was met with a somewhat cryptic error. 
It turns out that KMPlot required equations to be defined as named functions ( name(x) = expression ). Children at secondary schools in England don't learn about these until A-Levels, 4 years after they start playing with graphs. Whoops! My first patch fixed this by accepting "y = expression" equations and then re-writing them internally. <br /><br />A more significant problem was the interface design. A common use of OmniGraph was for teachers to load it up on their PC in a classroom which was connected to a projector. They would then enter several related equations into OmniGraph and use laser-pointers, electronic white-board pens or wooden rulers to explain the relationships between the various equations and their graphs. This environment imposes a couple of major requirements:<ul><br /><li>Projected images generally have poor contrast. The equations therefore need to be displayed in a big colorful font so that they can be read at a distance by pupils in a classroom.</li><br /><li>In order to compare multiple equations and their graphs, equations need to be displayed alongside their graphs.</li><br /></ul>In KDE 3.x KMPlot only showed the graphs in the main window. Equations were hidden in a separate window and were displayed only at ordinary text size. Not difficult to fix during the design stages but really hurts its use in a classroom environment. Happily KMPlot in KDE 4.x goes some way to addressing these problems - although the interface is not as clean and uncluttered as OmniGraph. On the plus side, the graph rendering is much more attractive in KMPlot.Anonymoushttp://www.blogger.com/profile/18355855797731147055noreply@blogger.com3tag:blogger.com,1999:blog-2996035476649681224.post-91800248788833414632007-12-30T14:48:00.000-08:002007-12-31T17:46:17.327-08:00Finishing touches and embedded terminal improvementsThe KDE 4.0 release is now very close, so I have been concentrating on fixing bugs and tidying up loose ends. 
Many bugs and missing menu items in the embedded terminal have been fixed in recent days.<br /><br />A number of features found in KDE 3's Konsole have not been implemented to acceptable standards yet and the UI for them has been removed. Konsole/KDE 4.0 will not have:<ul><br /><li>Session management (ie. remembering the tabs open in Konsole windows when logging out)</li><br /><li>Sending input from one tab to all other tabs ("Send Input to All")</li><br /><li>Support for terminal programs resizing the terminal window</li><br /></ul>The good news is that the embedded Konsole part became much more flexible in recent days as I fixed the context menu in the part so that it exposes features which were not previously available from the embedded terminal (in KDE 3 or the KDE 4 betas)<ul><br /><li>Profiles, including the default profile, are shared with the main Konsole application. They can be manipulated in the embedded terminal via 'Manage Profiles', 'Edit Current Profile' and 'Change Profile' on the context menu.</li><br /><li>Save output as plain text or HTML</li><br /><li>Scrollback options and clearing</li><br /></ul>If there are any particularly annoying bugs which you want to see fixed before the release, please make sure they are filed on bugs.kde.org and add votes to them to help me prioritise them. Thanks to everyone who has been testing the KDE 4 pre-releases and a happy new year to all :)Anonymoushttp://www.blogger.com/profile/18355855797731147055noreply@blogger.com17tag:blogger.com,1999:blog-2996035476649681224.post-38687932642346531872007-12-11T19:18:00.000-08:002007-12-11T20:53:36.140-08:00Memory-efficient KDE 4 debuggingThe GNU Debugger (gdb) is the standard tool for debugging applications on Linux. 
Unfortunately starting a KDE 4 application using the gdb debugger as it comes "out of the box" takes a long time (over a minute on my laptop) and uses vast amounts of memory (> 500MB) due to the time required to load the debugging 'symbols' (class names, method names etc.). This is a problem especially if you have built Qt, kdelibs and other important libraries with debugging information. If you are running on a machine with less than 1GB RAM (eg. my 512MB laptop used for development) then your system is likely to slow to a grinding halt for a while.<br /><br />As I discovered talking to other KDE hackers at <a href="http://www.fosscamp.org/">FOSSCamp 2007</a>, this means that some people never use the debugger at all, relying solely on debugging messages printed by the program as it runs.<br /><br />One solution to this is to only load debugging information for the code which you are interested in debugging. I wrote an email on the KDE developers list about how to do that <a href="http://lists.kde.org/?l=kde-devel&m=118151693509434&w=2">here</a>, which was written up more clearly by Constantin <a href="http://ascending.wordpress.com/2007/09/02/a-couple-of-gdb-tricks/">here</a> (his blog does not appear to be syndicated on PlanetKDE, so hopefully this reaches a wider audience). One important thing to bear in mind is that you can only ask gdb to set breakpoints (ie. stop execution in) functions for which debugging information has been loaded. <br /><br />Manually asking gdb to load the 3-6 libraries you need to debug a given problem can be quite a hassle. What I do is to define a few functions in my ~/.gdbinit file to load commonly used subsets of libraries. In order to debug most problems, you need the core Qt, KDE and C libraries loaded plus your application code. These all load relatively quickly and won't use too much memory, so it is usually useful to load them all together. 
If you need to examine the state of Qt widgets or other GUI-related things then you will need to load the QtGui library. This takes a few seconds and uses a fair amount of memory so it should be avoided otherwise.<br /><br />Add the following to your ~/.gdbinit file:<br /><pre><br />def load-common-kde-libs<br /> shar libc<br /> shar glib<br /> shar QtCore<br /> shar kdecore<br />end<br /><br />def load-gui-kde-libs<br /> shar QtGui<br /> shar kdeui<br />end<br /></pre><br />Then when debugging a KDE application, start the application with <i>set auto-solib-add off</i> (I put this command inside ~/.gdbinit as well, see the linked blog post and email above) and then interrupt it using Ctrl-C. Run <i>load-common-kde-libs</i> and then load any libraries specific to your application, usually <i>shar &lt;appname&gt;</i> will catch them. In many cases, this will be enough information to get useful backtraces (using <i>bt</i>) and examine the state of the application. If when you run the <i>bt</i> command the backtrace includes calls to functions inside the QtGui, kdeui or other libraries near the top before the calls to functions in your application's code then you will need to load those as well and then re-run <i>bt</i> in order to find out where in your code the problem is.<br /><br />As mentioned on the <a href="http://techbase.kde.org/Development/Tutorials/Debugging/Debugging_with_GDB">KDE TechBase</a> page, there is a script in SVN (trunk/KDE/kdesdk/scripts/kde-devel-gdb) which includes really useful gdb functions for debugging KDE applications, such as printq4string (prints the contents of QString objects) and identifyq4object (prints the class name of an object which inherits from QObject).<br /><br /><b>Other KDE debugging tricks</b><ul><br /><li>Stepping through an application which was built with compile-time optimization enabled (the default) can produce some really weird results because during compilation, the structure of the code may be altered and variables or function calls can be removed ('optimised out') to improve performance. Optimizations can be disabled by passing <i>-DCMAKE_BUILD_TYPE=Debug</i> to cmake when setting up the build. The resulting programs will run more slowly, depending on how much of the Qt/KDE library stack is built without optimisations (in my case, everything from Qt and up is).</li><br /><li>Some applications in KDE (eg. dolphin, konsole) are single-instance, which means that there is only ever one process for that program. If you start a second copy of that program then it contacts the first, asks it to create a 'new instance' (usually this means a new window) and then immediately quits. Applications which are single-instance support the <i>--nofork</i> argument to prevent them from creating a new process on startup. You can find out whether an application supports this by looking at the output of <i>appname --help-kde</i>. If you are debugging such an application, you need to run it with the <i>--nofork</i> argument. In gdb you can do this by executing <i>set args --nofork</i> before running the program.</li><br /><li>Some KDE components (eg. plasma) have their own crash handlers to trigger an automatic restart or bring up a specialized bug reporting tool (eg. amarok) in case of a crash. These custom crash handlers interfere with normal debugging, and they can be disabled by passing the <i>--nocrashhandler</i> argument on startup (like the above --nofork).</li><br /></ul>Over the Christmas period I hope to find time to write this up on the <a href="http://techbase.kde.org/Development/Tutorials/Debugging">KDE TechBase page</a>. 
Please try out the above and reply to this post with any problems/comments/queries so I can include the answers when I get around to it.Anonymoushttp://www.blogger.com/profile/18355855797731147055noreply@blogger.com9tag:blogger.com,1999:blog-2996035476649681224.post-11882604215029335512007-10-29T15:50:00.000-07:002007-10-29T18:36:11.706-07:00FOSSCampThis weekend I was invited by Jonathan Riddell of Kubuntu fame to attend a completely unscheduled conference in Cambridge, MA called FOSSCamp. Most of the attendees were Canonical employees but there were many representatives from other projects and companies (such as gnome, Red Hat, Samba, the FSF) as well. I was joined by Troy from KDE's publicity machine and Leo, Jeff from Amarok. Thank-you to Canonical (Jonathan and Claire in particular) for inviting us. <br /><br />I have never been to an 'unconference' before, but there were many interesting sessions going. Unfortunately given that there were 3-5 happening at any one time on the first day, I missed a few that I would have liked to attend. Here is a summary of the sessions which I was able to participate in.<br /><br /><b>Suspend and Resume</b><br /><br />The first session I attended was a whistle-stop tour through the seedy and hack-filled world of what happens when your laptop goes to sleep and wakes up again, led by Matt Garratt. After a basic introduction to how suspend and resume are supposed to work, Matt explained why it doesn't always. The next few sentences allude to terrifying hacks and should not be read by young children and people of a nervous disposition. <br />The main problem faced at the moment is getting the video hardware programmed correctly on resume. Currently the BIOS is tricked into running its own video setup code on resume which may or may not work. Better documentation from the manufacturers should help but it is slow to appear. Debugging suspend and resume (using the system clock as a debug data store!) was also covered. 
The need for more public details on the process was raised and they will probably appear in the <a href="https://wiki.ubuntu.com">Ubuntu wiki</a> soon.<br /><br /><b>HotWire</b><br /><br /><a href="http://code.google.com/p/hotwire-shell/">HotWire</a> is a replacement for the standard shell. It is both a text-driven graphical user interface and an object-orientated command line written in Python. The front-end is designed to greatly improve the accessibility of the shell with better auto-completion and history preservation. It can also use a modern graphical UI to visualize the output of commands. For example, typing "ls" produces a list of files in the current directory with appropriate icons and columns which can be resized and arranged. Commands are executed asynchronously so that multiple commands can be executed simultaneously and their output switched between much more easily than a conventional terminal. The downside when I last tried HotWire is that it didn't feel as responsive as a normal terminal. There are also quite a few bugs to fix and polishing required to make it feel slicker and a more natural experience. <br /><br />The other aspect of HotWire, the object orientated shell, was the one which garnered much more interest. Current UNIX shells pass simple byte streams between applications. Like Microsoft's PowerShell, the idea is that passing objects between commands could allow for much more powerful command lines and less error-prone text processing. Quite a few attendees at the session were interested in having a better object shell but still wanted to use it from within their existing terminals (which include the Linux terminal on machines without X, emacs and text editors or IDEs with shell plugins). <br /> <br /><b>GNOME Online Desktop</b><br /><br />Like Hotwire, there are several aspects to Gnome's <a href="http://live.gnome.org/OnlineDesktop">online desktop</a>, some more interesting than others. 
One aspect is re-engineering applications to sync with online services and make use of data from them out of the box with minimal effort on the part of the user. This is something which would probably be very valuable to many KDE users as well since they could use their favorite KDE applications for work and communication and still share the results easily with friends or get access to the information regardless of the PC or other device they are using. Another aspect is 'Big Board' which, from what I saw at the demo, provides a summary view of information of interest to the user across various local and online services. It also brings the concept of an online identity more prominently onto the desktop and provides facilities to sign into various online services without having to go to their website. Plasma has a few applets which do similar things with regards to providing status snippets and updates for online and local services so it would be possible, if Plasma was working as it should be, to create something that looked like Big Board. What 'Big Board' has in its favor is a much more cohesive design from the end user's perspective such that the various parts fit together out of the box in a way which is useful. The downside is that at the moment it does not appear to be all that flexible and is limited in the range of services that it can talk to. One of the attendees suggested that it should use Evolution Data Server to get its calendar, TODO, status updates and other data. In the KDE world, the equivalent would be Akonadi. Having a side-panel which displayed aggregated summary updates from my various mail sources, calendars and so on which are spread across multiple web sites, local and remote files would certainly be quite useful. With Qt 4 eye candy it could look very pretty. At one point during the demonstration the internet connection failed and 'Big Board' became a big white empty space at the side of the screen. 
Better off-line support is obviously a requirement. <br /><br />Looking at the website there are parts of the system (such as the web sign in daemon) which seem like good candidates for cross-desktop collaboration.<br /><br />The other major aspect of the OD project is the server which stores a subset of users' settings (including desktop setup and I guess personal information and passwords etc.) online.<br /><br />There is a lot to think about as far as KDE and online services are concerned. KDE's libraries (new and old) provide a good technical foundation on which some really neat applications could be built. The question is how to approach this from the user experience perspective; that is not something which I have seen a big discussion on yet. <br /><br /><b>Freedom for Web Services</b><br /><br />Brett Smith of the Free Software Foundation led a discussion on what 'free' means in the context of online services and how the advantages of online services might be provided in a way which preserves this freedom. Apparently the FSF's position is that no centralized online service can meet their definition of free because the users of such a service cannot modify the code which is run when the service is used, even if the service provider makes the code available (which most do not). In this view, even something like Wikipedia does not qualify as free, and the problem becomes a technological rather than a legal one: inventing a new means of delivering such services. <br /><br />There was some discussion around the distinction between freedom as it relates to software which an online service uses to process data and the freedoms users have over the data they create and store in the service. The need for an easy way to convey these data freedoms was raised. 
My personal view is that large centralized online services (as opposed to the FSF's theoretical alternatives) will be a reality for the foreseeable future and therefore companies need more incentives to provide users with freedom over their data. An example might be a simple labeling system. In the UK, food now comes with 'traffic light' labels on the packaging to indicate in simple terms how good or bad it is for you (in terms of fats, salt, sugar, proteins etc.), which has had some success in encouraging sales of healthier food and discouraging sales of unhealthy food. Perhaps the same idea could work for online services. The idea is that if a user, Mike, has a choice of "Larry's Calendar Service" or "Sergey's Calendar Service" he would be able to see a simple 'freedom rating' for the service and factor that into his decision. The problem of course is that the choice might largely be determined by the service which Mike's friends use rather than the service's features. <br /><br /><b>KDE 4 Libraries and Technologies</b><br /><br />This was quite a busy session led by Troy with discussion of the new libraries in KDE 4 and how they relate to other parts of the Linux platform. The usual topics of Qt 4 and KDE's new libraries for multimedia, hardware, search and the semantic desktop were also discussed. Leo and Jeff from Amarok were on hand to discuss the implications of these libraries in real, working applications. <br /><br />* WebKit was probably the topic which attracted the most interest as it is relevant for both Gnome and KDE developers. WebKit/KHTML's CSS 3 support was trumpeted by other developers present.<br />* The problem of the multitude of indexers available, including Strigi, Tracker etc., attracted some attention. I mentioned XESAM as a specification which allows client applications to be indexer-independent for queries. Mark Shuttleworth noted that this left the problem of how applications can provide information to indexers. 
I didn't know this at the time, but according to the <a href="http://xesam.org">XESAM website</a> the specification is being developed incrementally and facilities to do this might form part of the second iteration. <br /><br />One of the participants mentioned the need for a demo KDE 4 application (like Qt's qt-demo) which shows off KDE 4's new features and provides simple working code which new developers can make use of in their own applications. I agreed but pointed to the applications in kdegames and kdeedu as the best place to start until such an application is written.<br /><br />I mentioned Akonadi briefly, which generated some interest. Unfortunately at the moment it is difficult to demonstrate in a really meaningful way. <br /><br /><b>Education</b><br /><br />This session became a demonstration of KDE 4's educational applications. Marble in particular attracted attention from the participants. One point that did come up in the discussion was the usefulness of being able to adjust an educational product to fit in with different cultures and environments. An example given was an educational authority which wanted to remove smoking-related items from KTuberling. In KDE 4 this should be much easier to do given the use of SVG objects, which allows individual items in scenes to be adapted and removed. I attempted to give an impromptu demonstration with Inkscape, but it wasn't quite as easy as I hoped since I picked the wrong file to edit. More documentation for distributors on how to edit or create new resources for applications would be useful.<br /><br />The importance of making educational applications and games which work well on thin client systems was also raised. Firefox was cited as a problematic application because of the memory used for X pixmaps. There was hope that a WebKit browser would help in this area. For KDE, it is not easy to judge the problems until someone actually attempts to use the applications in such an environment. 
<br /><br /><b>Upstream Patch Flow</b><br /><br />This session was a debate and discussion on how we (upstream) can discover and manage patches produced by distributions. Ubuntu hackers pointed to a few resources such as <a href="http://merges.ubuntu.com">Ubuntu Merges</a> which can help upstreams keep track of patches. Unfortunately both upstream and downstream have the same problem in that they have a large number of suppliers and consumers respectively which all have different systems for managing and accepting patches. Something which Ubuntu can do in the short term is to send automated emails to upstream maintainers about patches which are added. A quick review of patches to kdebase/kdelibs showed that much of what is patched or added is changes to the build system, which is less interesting than changes to the code itself.<br /><br /><b>KDE 4 Applications</b><br /><br />This session took place on the second day, which was considerably quieter. I demonstrated a wide range of KDE 4 applications from various KDE projects to a small audience of about 15. There was appreciation for many of the new features, although in order to really impress people the front-end needs to be much slicker. The colour scheme which ships with the Oxygen style does not work very well on projectors or lighter flat screens at the moment; adjustments are definitely required in this area. Leo and Jeff led a demonstration of the new features which Amarok 2 will provide, including more online services, a new playlist, a much more prominent canvas for displaying contextual information and better plug-and-play support for a variety of media devices.<br /><br />I was also asked about PIM applications afterwards. As many readers will know, the PIM applications shipped with KDE 4.0 will be fairly similar to their KDE 3.5 counterparts. 
Akonadi should make much more exciting things possible.Anonymoushttp://www.blogger.com/profile/18355855797731147055noreply@blogger.com3tag:blogger.com,1999:blog-2996035476649681224.post-46200530320044865132007-09-27T22:10:00.000-07:002007-09-27T23:00:11.068-07:00Kickoff reduxAs Sebas <a href="http://vizzzion.org/?blogentry=732">mentioned</a> recently, I have been working on a new implementation of the <a href="http://en.opensuse.org/Kickoff">Kickoff</a> start menu/application launcher. It is currently functional, although I have not started work on some of the presentational aspects yet (such as the background and tab styling). Hence no screenshots in this post. If you are testing KDE 4 on a system with KDE 3 installed, only the KDE 4 version of an application will be shown when both KDE 3 and KDE 4 versions are available. The search view is currently limited to application searches, but I hope to have Strigi searching working soon. Ideally query handlers will be shared between the launcher and the run dialog.<br /><br />The new Kickoff can be found in trunk/playground/base/kickoff-rewrite-kde4/ and it currently builds both as a standalone application ('kickoff') and a Plasma applet ('Application Launcher' in the 'Windows and Tasks' category). The Plasma applet is a simple KDE logo button which pops up the menu when clicked on.<br /><br />Hopefully Kickoff will ship with Beta 3 next week.<br /><br />This new implementation is being done from scratch using Qt 4 / KDE 4 frameworks. This is the first time I have made any real use of some of the new KDE 4 libraries; they are great to work with and a few little extras have been added as a result. For example, one minor detail I added compared to KDE 3's Kickoff is that in KDE 3, the 'My Computer' tab always has an icon of a tower desktop system. 
In the new KDE 4 Kickoff the icon will change to a laptop or a tower depending on what kind of system it is being run on - and that is done with a couple of lines of code.<br /><br />Finally, feedback is always welcome. If you are an OpenSuSE/KDE user who uses the Kickoff menu on a regular basis, feel free to add your thoughts to this post's comments :)Anonymoushttp://www.blogger.com/profile/18355855797731147055noreply@blogger.com25tag:blogger.com,1999:blog-2996035476649681224.post-24382707350020286042007-09-13T08:03:00.000-07:002007-09-13T08:15:47.286-07:00Funny languages in the terminalWhen I look at Arabic, Chinese or Japanese text all I see are some odd squiggles. Some text that used to produce square-looking squiggles in KDE 3 now produces more curvy squiggles in KDE 4. Apparently this is a good thing. Most of this is down to Qt 4. But I still don't know whether it is readable to people who normally see something other than squiggles.<br /><br />I had a stab at adding input method event support to Konsole in trunk recently, but I am very limited in my ability to test it. There is a <a href="http://bugs.kde.org/show_bug.cgi?id=149426">bug report</a> to file comments against. <br /><br />If you can read and write in these languages, or even better, understand the mysterious world of Unicode - please help me to find bugs in Konsole's language support so that I can fix them.Anonymoushttp://www.blogger.com/profile/18355855797731147055noreply@blogger.com3tag:blogger.com,1999:blog-2996035476649681224.post-18329938291292694732007-07-11T18:07:00.000-07:002007-07-11T18:46:13.124-07:00Quick updates and Akademy thanksFirst, some terminal updates:<ul><br /><li>In the past, terminal programs running in Konsole had no idea about the color scheme being used, although some tried to guess based on the value of the TERM environment variable. 
I found out that the rxvt terminal sets a COLORFGBG variable which is recognised by Vim and others.<br />The upshot of this is that when a color scheme with a dark background is used by Konsole, Vim will automatically pick appropriate colors for syntax highlighting. The same is true for a light background. Thanks to Robert Scott for bringing the problem to my attention.</li><br /><li>I added a handy "hidden" feature to the color schemes: per-session random colors. This is commonly used for the background color in a color scheme so that you can tell different sessions apart at a glance - especially when you are working with thumbnails of the terminal. KDE 3 had a random hue feature, but it was random per Konsole process, which is not so useful. I say this feature is hidden because the option is not yet exposed in the GUI and requires manually editing the .colorscheme files. Adding this feature to the UI without cluttering what is currently a nice tidy dialog will require some careful thought.</li><br /><li>There was much tweaking in response to feedback from other hackers at Akademy. Cheers for the feedback :)</li><br /></ul>But the main point of this post was to thank everyone, especially the organizers, for a great Akademy. As always, the best part of this kind of conference is the people met, the discussions had and the general feeling of fraternity. KDE 4 itself still needs a lot of work before the final tarballs can be rolled, but I am confident we'll do Konqui proud.Anonymoushttp://www.blogger.com/profile/18355855797731147055noreply@blogger.com2tag:blogger.com,1999:blog-2996035476649681224.post-49112333957202808532007-06-13T10:58:00.000-07:002007-06-13T12:00:39.511-07:00Plain EnglishEveryone has their pet hates. It might be an annoying personal habit, anonymous callers or Paris Hilton. One of mine is the use of geek English in software. 
Aside from being a barrier to new users, it also undermines an application's status as a high quality piece of software, software that a contributor can be proud of. A few words quite widely seen in KDE circles that I find irritating are "Configure", "Initialize" and "Schema". 'Configure' and 'Configuration' are particularly bad because:<br /><ul><li>It is a longer and less commonly used word than the shorter alternatives ("Edit", "Settings"). This appears to be true of translations into various European languages as well. I have a hypothesis that shorter, more commonly used words are quicker for us to parse when reading user interface elements and enable us to make decisions more rapidly, thereby saving time.</li><br /><li>It is an unpleasant sounding word. This might sound odd, but I feel that the choice of words or phrases contributes to how polished and attractive an application is.</li> <br /></ul>Obvious suggested replacements:<ul><br /><li>Configure -> Edit, Settings, Preferences</li><br /><li>Configuration -> Setup, Settings</li><br /><li>Initialize -> Start, Starting, Loading</li><br /></ul>As an example, on startup KNetworkManager displays some progress information when connecting to a wireless station. The labels go something like this:<br /><br />"Activation stage: Preparing device"<br />"IP configuration started"<br />"Commit IP configuration"<br />"Device activated"<br /><br />Good grief. The only information I really needed to know is:<br /><br />"Preparing"<br />"Connecting"<br />"Connected"<br /><br />A week or so ago I became sufficiently frustrated that I went and patched the most visible uses of these words out of KDE's libraries and Konqueror's various settings dialogs. 
Patches as always:<br /><br /><a href="http://www.robertknight.me.uk/files/kde/kdelibs_kdeui_remove_configure_v3.patch">kdelibs patch</a><br /><a href="http://www.robertknight.me.uk/files/kde/konqueror_settings_module_cleanup_v1.patch">konqueror patch</a><br /><br />Incidentally, the reason this is being posted here and not initially to the KDE development mailing lists is to inspire users to complain more about the use of overly technical and/or geeky language in their applications.Anonymoushttp://www.blogger.com/profile/18355855797731147055noreply@blogger.com28tag:blogger.com,1999:blog-2996035476649681224.post-47953352495659063252007-06-02T06:51:00.000-07:002007-06-09T05:13:47.988-07:00konsoleprofileI added a handy little tool to Konsole recently. <br /><br />konsoleprofile allows any profile options to be set from the command line. This means that it is possible to change the color scheme, font, tab titles, menu bar mode, scrollback size, icon, key bindings, cursor options and others without resorting to the GUI.<br /><br />For example, running:<br /><br />konsoleprofile colors=GreenOnBlack<br /><br />inside the shell will change the active tab to use the green-text-on-a-black-background color scheme.<br /><br />konsoleprofile icon=kde<br /><br />will set the icon for the active tab to the KDE icon.<br /><br />konsoleprofile showmenubar=false<br /><br />will hide the menu bar.<br /><br />The actual parsing of the commands is done inside Konsole. konsoleprofile is just a trivial shell script which puts some magic markers around the command so that Konsole interprets it as a profile change request. Aside from simplicity, this has an advantage over the DCOP days of old because it doesn't matter whether it is being run locally or on another computer being accessed via SSH. 
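The magic-marker approach can be sketched in a few lines of shell. In the sketch below the markers are an OSC (operating system command) escape sequence with code 50, which matches the sequence Konsole has historically recognised for per-session settings - but treat the exact code and payload format as assumptions and check Konsole's sources before relying on them:

```shell
#!/bin/sh
# Sketch of a konsoleprofile-style wrapper. It does no parsing itself;
# it just wraps its "key=value" arguments in an OSC escape sequence
# (ESC ] 50 ; ... BEL) so the terminal emulator can interpret them as
# a profile change request. NOTE: the code 50 is an assumption based
# on Konsole's historical use of "\e]50;...\a" for session settings.
konsoleprofile_sketch() {
    printf '\033]50;%s\007' "$*"
}

konsoleprofile_sketch "$@"
```

Run as `konsoleprofile_sketch colors=GreenOnBlack`, this writes the setting to the terminal wrapped in the escape sequence; a terminal that understands the sequence applies it to the current tab, and any other terminal simply swallows it. This is why the same script works over SSH - the markers travel through the byte stream like any other terminal escape code.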
The tool currently affects every tab using the same profile as the current tab; that is clearly not always desirable, but I plan to fix it shortly.Anonymoushttp://www.blogger.com/profile/18355855797731147055noreply@blogger.com14tag:blogger.com,1999:blog-2996035476649681224.post-37849006390634806522007-05-05T18:44:00.000-07:002007-05-05T19:49:51.777-07:00New Konsole lands & kdegames/edu funI have moved the new Konsole front-end back to trunk. It wasn't quite ready for the tagging of the first KDE 4 alpha release, and there are a few notable items still to implement ( eg. key bindings editor, composite transparency ) but if you are building KDE 4 from sources, please test.<br /><br />The past two weeks saw some handy additions, the most useful of which are probably the live tab titles. This allows tab titles to contain a mixture of normal text and dynamic elements which update as you navigate around directories, run different programs or connect to remote computers in the shell. The tab title defaults to a combination of the last part of the path and the running program name, but can be customised either for a specific terminal session or for a particular profile. You can specify separate formats for local activities and remote activities (ie. SSH), and Konsole will switch between them automatically.<br /><br />I finally got into sorting out Konsole's profile management as well. In KDE 3 times, these were referred to as "Sessions". A profile is a saved terminal setup which can be used as the basis for new tabs or windows. One of the profiles is set as the default and is used when you start a new Konsole. In KDE 3, some terminal options were global and others could be changed in each "session" - and this distinction was slightly arbitrary. In KDE 4, every setting is handled on a per-Profile basis. 
In addition to the default profile, there are a number of favorite profiles, each of which gets an item in the File menu that creates a new tab with that profile when clicked. In KDE 3, every "session" type had a menu item, which caused problems for sysadmins who had 30 different types for connections to various machines - so the profiles shown on the File menu can now be specified by marking them with a star in the "Manage Profiles" dialog.<br />One often-requested feature is the ability to preview color schemes. This is implemented in the color scheme page of the dialog used to edit profiles. As you move the mouse over a color scheme, all open terminal displays using the edited profile will be redrawn in that color scheme. Fonts can also be previewed in a similar fashion. <br />In addition to profiles, a select number of options can be changed for a particular tab as before - such as character encoding, text size, history options, and in the future color scheme as well. Users who need to change the character encoding should now find it easier as Konsole now uses KDE 4's new character encoding menu which divides the encodings up nicely into groups instead of the loooong list which was there before. <br /><br />The problem with all these dialogs, however, is that they require use of the mouse, which is really not the point of a terminal, so next on my list of things to do is the ability to control profile options in a completely keyboard orientated fashion. <br /><br />In non-Konsole news, I saw the kde.dot complaining about excess frames etc. in kdegames and went round and fixed each of the games. This fit and finish work is really pretty easy, so it makes a great way to contribute to KDE if you have not done so before, even if you know little in the way of C++. Along the way I got round to playing quite a few of the games. They are good fun and look great thanks to the new artwork. 
I also played around with the latest and greatest in the kdeedu module - some of the applications ( kmplot and kalzium to name two ) have changed quite radically from KDE 3 to 4. So even though there is no new desktop shell to be seen in KDE 4.0, much has happened elsewhere.<br /><br /><span style="font-weight:bold;">Cheers to the kdeedu and kdegames teams :)</span>Anonymoushttp://www.blogger.com/profile/18355855797731147055noreply@blogger.com15tag:blogger.com,1999:blog-2996035476649681224.post-46743209492259515622007-04-18T15:41:00.000-07:002007-04-18T16:09:23.679-07:00Easter holiday terminal updateQuite a bit of progress in Konsole over the Easter holiday, summary of the best bits:<br /><ul><br /><li>Began a <b>re-write of the Konsole Part</b>. Under KDE 3.x, the Konsole part and the main application duplicate a lot of code, and much of the functionality in the main application is not available in the part. The goal is to keep the Part and application specific portions of the code much smaller in KDE 4. This should make maintenance easier and also make many/most of Konsole's KDE 4 enhancements and new features available in Kate, KDevelop, Dolphin, Yakuake etc. using the part.</li><br /><li>Implemented a <b>Yakuake-style 'background mode'</b> for Konsole. This means that a shell session is started in the background, which can be displayed or hidden instantly from anywhere by pressing a global shortcut ( currently hardcoded to F12, but eventually customisable ).</li><br /><li><b>Improved the bookmark system</b>. When creating a new bookmark, Konsole now chooses the working directory of the foreground program instead of just using that of the main shell (Useful if you nest shells or are running any interactive program in the shell) If the foreground program is SSH, this saves the current user and host as a special SSH bookmark. 
Oh, and I added <b>'Bookmark Tabs as Folders'</b> support.</li><br /><li>Allow the user to choose which custom sessions are shown in the main menu.</li><br /><li>Allow the user to <b>split the view left/right</b> ( in addition to top/bottom ) and support as many views open at once as the user wants. <a href="http://www.robertknight.me.uk/files/kde/konsole-split-view-2.png">Screenshot</a></li><br /><li>Began work on the <b>Konsole settings replacement</b>.</li><br /></ul><br />So getting closer to merging the branch back into trunk for everyone to play with. I will do that once it is capable of creating new custom sessions and saving / loading the settings of session types.<br /><br />Thanks also to everyone, all <b>8000+</b> of you, who took the time to complete the Konsole survey. Your feedback is much appreciated. I have summarised about 2/3rds of the results ( <a href="http://websvn.kde.org/branches/work/konsole-split-view/developer-doc/konsole-survey-findings?view=markup">available here</a> ) so far, and already this week I implemented a number of changes based on the results ( Background mode, SSH bookmarks ). There were many kind words for the developers ( past and present ), always appreciated :)Anonymoushttp://www.blogger.com/profile/18355855797731147055noreply@blogger.com18tag:blogger.com,1999:blog-2996035476649681224.post-68261185172519538152007-03-18T15:31:00.000-07:002007-03-18T15:54:48.772-07:00Konsole SurveyOne of the areas of Konsole which I really want to give a little love for KDE 4 is the terminal setup ( terminal features, colour schemes, display options, character encoding etc. ) and creation of profiles for different types of terminal.<br />To get a better idea of what users need from their terminal, I have set up a survey. There are 28 questions, all optional, and opportunities for additional comments at the end. 
Aside from helping me to understand how you use and set up your terminal, this also gives you a chance to prioritize a few features for KDE 4.<br />Please help improve Konsole by telling us about your needs and preferences. Completing the survey should only take 5-10 minutes of your time. Thank you in advance.<br /><br /><a href="http://www.robertknight.me.uk/survey/public/survey.php?name=konsole_settings">Konsole Usage and Preferences Survey</a><br /><br />On the subject of surveys, I had difficulty finding a good PHP-based survey system to use. I used phpESP in the end, but recommendations about alternatives would be welcomed.<br /><br />Thanks to seele for checking the questions and spotting a few problems.<br /><br />The current roadmap is to implement the new terminal setup facilities in the next month or so, and then merge the development branch into the main KDE 4 trunk. After much tidying up of loose ends, it will be ready for the feature freeze in June.Anonymoushttp://www.blogger.com/profile/18355855797731147055noreply@blogger.com3