I want to make my GTK+ applications use the new notification area in Ubuntu. How can I do this? An example is shown below:
[screenshot of an Ubuntu notification bubble; source: iconocast.com]
I'm not on an Ubuntu box so I can't write out any examples.
But Ubuntu's NotificationDevelopmentGuidelines page has a lot of information.
Examples in C, C#, and Python.
Debian also has a tutorial that covers much of the same ground.
Basically you're going to be tying into the NotifyOSD framework, which leverages freedesktop.org's D-Bus messaging system.
For development you'll need libnotify (that's the only online documentation I could find).
If you just want some quick results from the command line or to use in a shell script you can also use the notify-send command.
Usage:
notify-send [OPTION...] <SUMMARY> [BODY]
Example:
notify-send Test "Totally gnarly message bro"
Or you can specify an icon:
notify-send -i ../icon.jpg Image "This is a sweet picture"
There are a bunch of other options as well: expire time, urgency level, category.
Ubuntu doesn't follow the Desktop Notifications specification that closely; they don't honor a lot of the options defined by freedesktop.org. Don't be surprised if some things that work with another notification daemon don't work with Ubuntu's notifier.
Some Other resources:
Ubuntu's NotifyOSD wiki page.
ArsTechnica has a great article on the new notification system.
Great article on some of the flaws in Ubuntu's notification implementation.
Related
I am editing a large batch of photos using the same steps, and want to create a program to run through terminal that will run the process for me. I am comfortable with writing in C, but I am unsure of how to start on the code/what commands to use.
When I am in GIMP, I start by opening a .xcf file, and importing the photo I wish to edit in as the bottom layer. Next, I resize the layer to 1000px wide. After that, I edit the curves with a preset I have saved, and then do the same with the brightness controls. Finally, I export the file as a .png with a specific name: 01-0xx.png, based on the number of the photo in the set.
This sounds like a job for macros or the automation tools available in Gimp:
Ref: Gimp Automate Editing https://www.gimp.org/tutorials/Automate_Editing_in_GIMP/
This tutorial will describe and provide examples for two types of
automation functions. The first function is a tool to capture and
execute “Macro” commands. The second function is a set of Automation
Tools to capture and run a “Flow” or “Process”. The code for this
tutorial is written using Gimp-Python and should be platform portable
– able to run on either Linux or Windows operating systems. *
The goal of these functions is to provide tools that speed up the
editing process, make the editing process more repeatable, and reduce
the amount of button pushing the user has to do. Taking over the
button pushing and book-keeping chores allows the user to focus on the
more creative part of the editing process.
I haven't ever used GIMP, but programs of this sort typically have automation scripting support, and this is the right place to start.
Could be done with C, but the learning curve is steep.
You can write Gimp scripts in Scheme (Lisp) or Python, and if you know C you can learn enough Python in a couple of hours. See an example of a Python batch script here.
Side note #1: Curves+Brightness contrast can be done in one single call to Curves (with a different curve of course). Each operation entails some color loss, so the fewer, the better.
Side note #2: It may be simpler to do this without Gimp, using:
The ImageMagick toolbox (commands called from a shell script)
An image library with any language (Pillow for Python).
Your Curves preset is just what is called a "CLUT" (Color Look-Up Table).
I have been attempting to create an app using C in Code::Blocks on Win7.
Can anyone please point me to better documentation than the GNOME site? Or failing that, can someone point me to a place where I can see which signals are allowed for which widgets?
I recently wrote an app using Python and found Tkinter to be very good; every time I searched Google for help on Tkinter, the documentation was easy to read and understandable.
The GNOME GTK documentation, however, is really bad. Yes, it describes each function, but it doesn't lead you to the other parts needed to get a full understanding of the function.
They go into great detail in some cases, actually including an entire program as an example (without comments in the code, I might add), totally obscuring the forest while trying to describe the tree.
I don't want to get too bogged down in a detail of my problem now, but this is an example of my frustration.
Specifically, I am attaching a signal to an entry widget, and I can find the g_signal_connect declaration that gives the parameters needed, like the instance, the_signal, the handler and such, but nowhere does it say WHICH signals can be used.
I guess it is because each widget may use a different subset of signals, but, to date, I have not found even a list of the signals available, let alone which ones can be used on which widgets.
I can find the gtk_entry_new() definition, but again, that description doesn't give a list of allowable signals. Just how to call it.
I saw an example that uses the "insert_text" signal, but that isn't really right; another site says there is an "activate" signal, but that only fires if the user presses Enter, not if the user clicks elsewhere in the window.
Any help is appreciated.
Mark.
I've already seen that doc issue. The way the doc is generated has changed, and it seems this broke some parts of the GTK+ 2 generated doc. Now, you shouldn't be using GTK+ 2 in the first place. GTK+ 3 has been the stable release for years now, and GTK+ 2 should only be used in legacy projects. GTK+ 4 is on its way to being released this year.
To know which signals can be used on which widget, you just have to go to the "signals" section of the documentation page of that widget. For example, here are the signals specific to GtkEntry. Each widget doc page has a top bar with several section shortcuts, with links to the sections you want:
Top | Description | Object Hierarchy | Implemented Interfaces | Properties | Style Properties | Signals
You see the last one is about signals.
Now this is only for the signals specific to the class. This is object-oriented programming, so you can use the signals from the parent classes as well. Just click on the "Object Hierarchy" link and you'll be sent to the inheritance diagram of the class. From there you can explore the parent classes, and then their signals.
You may also want to install the Devhelp program, which gives you a search-as-you-type entry and gathers the docs of lots of other libraries that GTK+ and the GNOME platform depend on (cairo, pango, etc.). Install it with your package manager, and you'll have offline help for all the development packages you have installed, at the versions you're really using.
Sorry for my bad English. I have been working with 3D shapes in OpenGL on a Raspberry Pi 3 (Debian) for a while. I want to run my code without using the desktop (or a window). I searched, but it only puzzled me. In a nutshell, I want to run my code as in the image below.
When I searched this topic, I came across the EGL library, but I don't know if I can use it.
If you have used the OpenMAX library before, you know OpenMAX doesn't use a window: images and video can run in console mode, with no desktop needed. I wonder: is there a way I can use OpenGL like this? (Can OpenGL run like the OpenMAX library or not?) If there is a way, how should I build my code? I want to render my image without a desktop, in console mode.
Thanks for your time. Best regards.
The most straightforward solution would be to just create a fullscreen window, that has no border and no decorations (titlebar, buttons, etc.). If you want actual graphic output, there's nothing wrong with using X11. Despite some hearsay thrown around on the Internet the Xorg X11 servers are actually pretty lightweight.
If you really want to go without X11, then you should look at things like the kmscube demo https://cgit.freedesktop.org/mesa/kmscube/tree/ which does OpenGL directly to the display, without a graphics server or windowing system in between.
If you want it to be a little more abstracted, then have a look at how Wayland compositors talk to the display. The developers of the Sway Wayland compositor developed a nice abstraction library for this: https://github.com/swaywm/wlroots
You need to start a display server first.
What you need could work with xinit, which manually starts the Xorg server; after that you should start openbox, which is a window manager. This way your desktop application should run as is, no changes needed.
Best practice is to create shell script for starting your application which could look like this:
#!/bin/sh
set -e
xset s off       # disable the screensaver
xset -dpms       # disable display power management
xset s noblank   # don't blank the screen
openbox &        # start the window manager in the background
cd /home/your_application_directory
your_executable 2>/dev/null >/dev/null
Save this script and mark it executable with
chmod +x
Then try to run this:
xinit /full_path_to_above_script
Hope this helps a bit... :)
Qt has a platform backend called eglfs, which lets your application run fullscreen on a single screen by using EGL and kms with very little overhead. Should work nicely with whatever OpenGL stuff you want to do.
You would just program a Qt application like normal, and launch it with ./myapp -platform eglfs from a tty.
http://doc.qt.io/qt-5/embedded-linux.html#eglfs
Hello, I know that I can use something like AppleScript to control different aspects of the operating system, like keyboard input:
osascript -e 'tell application "System Events" to key code 45'
or mouse clicks:
osascript -e 'tell app "System Events" to click at {100,200}'
and many other features like volume up/down, open/close app, go to url in web browser.
Now I wonder how I could do this without AppleScript on OS X (macOS). I am thinking of some low-level API, preferably in C (possibly Objective-C), that could do similar things. I am interested mainly in software control of the mouse/keyboard/trackpad (as if I were writing a virtual keyboard), opening apps, and invoking shortcuts in apps. I will probably end up using AppleScript and executing its scripts via C, but I am also interested in more low-level programming and libraries in C.
What is the best way to execute such AppleScripts from other languages like C? I am thinking of something like
system("osascript -e 'tell application \"System Events\" to key code 45'");
But maybe there are better functions/libraries, like an osascript("here cmd");
Before going from AppleScript up to C, I recommend a half-way step: the "AppleScript Objective-C" (AppleScriptObjC) language. It's simple, because you're probably already good at AppleScript, and you can also use the much larger range of Objective-C APIs. It's AppleScript in Objective-C! :D
You can do it in Xcode! Hope this might help a little :)
I know that some Microsoft employees are members of Stack Overflow, from the famous Raymond Chen to Larry Osterman (engineer of Vista's audio stack and per-application sound-control mechanism), and we know Jeff Atwood is here too. So maybe we can learn some lessons about managed code in core Windows components straight from the horse's mouth.
I have downloaded all the leaked Windows Longhorn builds (from the "obvious" sources) and poked around for managed code with tools like dotPeek and ".NET OR Not". I found that managed code was declining in every build after the August 2004 "Longhorn reset", but I even found Windows Movie Maker written in managed code.
So, here is the question: what are the difficulties of writing core OS components in managed code?
I'm sure there's other considerations, but this is a reasonably obvious one that springs to mind:
Managed code components require a specific version of the managed runtime, and, IIRC, a process can have only one instance of the managed runtime in it. Right off the bat, this rules out using managed code for shared components - since an app and one or more of its components could require different versions of the runtime, and limits its use to application-style components.
Also keep in mind that more parts of Windows are actually 'shared components' than might be immediately obvious. While you might think of Explorer as a form of application, as soon as an app opens a File/Open common dialog, it's now got a bunch of Explorer components within it, listing the available files and directories.