My program uses ncurses, and it currently lets the user draw a $ character on the screen when a key is pressed:
mvaddch(y, x, '$');
I also have a box drawn. After the user presses a specific key, I want the $ to be erased and redrawn at the new position, without erasing the entire screen. I tried erase(), but that clears the whole screen, including the box I want to keep. How can I do this?
The usual way to approach this is to create separate windows on the screen, e.g., with newwin or subwin, drawing the box and the '$' in different windows, and using clear/wclear to clear only the appropriate window.
Keep in mind that getch/wgetch does a refresh on the window with which it is associated, so that may overwrite updates from overlapping windows.
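To make the idea concrete, here is a minimal sketch (not the poster's code; window sizes and keys are arbitrary): the box lives in an outer window, the '$' moves in an inner derived window, and the old '$' is simply overwritten with a space, so the border is never touched.

```c
/* Sketch: box in an outer window, '$' in an inner derived window.
 * Compile with: gcc demo.c -lncurses */
#include <ncurses.h>

int main(void)
{
    initscr();
    cbreak();
    noecho();

    WINDOW *frame = newwin(12, 40, 2, 2);        /* holds the box  */
    WINDOW *field = derwin(frame, 10, 38, 1, 1); /* holds the '$'  */
    box(frame, 0, 0);
    wrefresh(frame);
    keypad(field, TRUE);                         /* arrow keys     */

    int y = 0, x = 0, ch;
    mvwaddch(field, y, x, '$');
    while ((ch = wgetch(field)) != 'q') {        /* wgetch refreshes field */
        mvwaddch(field, y, x, ' ');              /* erase only the old '$' */
        switch (ch) {
        case KEY_UP:    if (y > 0)  --y; break;
        case KEY_DOWN:  if (y < 9)  ++y; break;
        case KEY_LEFT:  if (x > 0)  --x; break;
        case KEY_RIGHT: if (x < 37) ++x; break;
        }
        mvwaddch(field, y, x, '$');              /* draw at the new spot   */
    }

    endwin();
    return 0;
}
```

Overwriting the old cell with a space is even gentler than wclear: nothing else in the window is disturbed, and the box border is outside the inner window entirely, so it can never be erased by accident.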
In the game I am currently making, the player will frequently need to press two buttons at the same time, for example "jump" and "left". Currently I have two buttons on the left side of the screen for left and right, and one on the right for jump. The problem is, if the player is pressing any of them, it effectively locks the other buttons, making it so you can't press them.
Is there some way, perhaps using InputEventScreenTouch, that I can detect the user's current touchscreen input every frame, and check if they are pressing one or more buttons?
Edit for more info:
Currently each button sends its button_down() and button_up() signals to the same function in the HUD scene root, and I have simple functions to handle that:
func setLeft(val):
    # This is the variable that the player
    # checks each frame to see if it should move
    self.get_parent().get_parent().goingLeft = val
So when the player starts pressing the left button, it sets goingLeft to true, and when they release it it sets goingLeft to false.
This should work fine, but I have discovered that if the player is touching the screen anywhere else, the buttons don't register input. For example, if I press the left button, the player starts going left, and then while I am holding that, I start pressing jump. The jump button doesn't register, because my finger is already pressing somewhere else on the screen (the left one).
It doesn't seem to matter whether what I'm pressing is a button; Godot only seems to check the buttons when there is exactly one touch input.
Is there some way to get around this?
I have found the solution, thanks to some people in the comments!
I changed the button types to TouchScreenButton nodes, and they have multitouch enabled, so I can press more than one at once, on a mobile device!
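For anyone hitting the same thing, here is a hedged sketch of the wiring (node names and paths are assumptions about the scene, not taken from the question). TouchScreenButton emits "pressed" and "released" rather than button_down()/button_up(), and it handles multitouch natively, so each button tracks its own finger:

```gdscript
# Hypothetical HUD script; node names/paths are assumed for illustration.
extends CanvasLayer

onready var player = get_parent().get_node("Player")  # assumed path

func _ready():
    # TouchScreenButton uses "pressed"/"released" instead of
    # button_down()/button_up()
    $LeftButton.connect("pressed",  self, "setLeft", [true])
    $LeftButton.connect("released", self, "setLeft", [false])

func setLeft(val):
    # Checked by the player each frame, as in the original setup
    player.goingLeft = val
```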
I'm seeing some odd behaviour in my app that only manifests in iOS 7.
When the keyboard comes up from the bottom it pushes the screen up to focus the input box. After the keyboard is dismissed a grey space remains where it was, leaving the rest of the window pushed up. Images below will demonstrate what I'm saying.
I'm using Sencha on this webapp.
While the images show the split keyboard, this occurs for both split and non-split keyboards.
I can swipe down in the grey space to bring the rest of the view back onto the screen, but that isn't a solution.
What is the cause of this behavior and how can I avoid it?
Below is a slightly modified email that I sent out describing the cause of this:
After seeking an answer for our keyboard issue and coming up empty-handed, I think I figured out what is going on. It seems to be related to a bug in iOS 7.
Take a look at the attached ‘normal_behaviour.jpg’ file. This shows what the split keyboard looks like from iOS 6 to iOS 8. The input field I had selected on the Apple website was near the top of the page. When I touched it the web page slid up very slightly to ensure that the input field was still visible.
In the attached ‘unwated_behaviour.jpg’ file I found an input field close to the bottom of the screen and selected it. On both iOS 6 and iOS 8 the keyboard covers the input field. On iOS 7, however, the entire webpage is slid up so you can see the input field, which is great from a user-friendliness perspective; but when the keyboard is dismissed, the grey area where the keyboard was remains, and the rest of the webpage does not slide back into place.
Also of note is that once you select an input field near the bottom of the page with a split keyboard you are able to slide beyond the end of a webpage in any other website you bring up within the same Safari session. You can see a few examples of it in ‘buggy_behaviour_other_pages.jpg’.
Of course with our app we are really using a Safari webview to display pages so we are vulnerable to the same issue when using the Split keyboard. The issue does not present itself with the full keyboard.
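One commonly reported mitigation (a workaround, not a fix for the underlying iOS 7 bug; treat it as a sketch to adapt): when an input loses focus, i.e. the keyboard is dismissed, scroll the document back into place so the grey strip disappears. Nothing here is Sencha-specific; it is plain DOM code.

```javascript
// Workaround sketch: after the keyboard goes away, snap the page back.
document.addEventListener('focusout', function (e) {
  if (e.target.matches('input, textarea')) {
    // Defer one tick so the keyboard has actually been dismissed.
    setTimeout(function () {
      window.scrollTo(0, 0);
    }, 0);
  }
});
```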
[Image: Unwanted behaviour]
[Image: Normal behaviour]
[Image: Buggy behaviour, other pages]
Title pretty much says it all: I'm wondering whether it's possible to change the mouse cursor icon in response to an event in a terminal app (e.g., a click), using the ncurses library or another library?
For example: I am running xterm under X, and a curses application inside that xterm. I may or may not be sshed into another box.
A user clicks on an element of my curses app -- is it possible to change the mouse cursor icon from a bar to a plus sign in response to the click?
There is some information here but I'd like a more complete resource:
Mouse movement events in NCurses
I don't believe it is. ncurses can read events from the mouse, but it cannot change the mouse cursor's appearance; the terminal sends mouse movements and clicks to the ncurses program as escape sequences.
Some terminals, such as PuTTY, will change the cursor to an arrow when a region is clickable; otherwise a text-selection cursor is shown. But I don't think this is controllable through escape sequences.
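For completeness, here is a sketch of the half ncurses does support: receiving mouse clicks via mousemask and getmouse (the quit key and message positions are arbitrary choices for the demo):

```c
/* Sketch of what ncurses *can* do with the mouse: receive click events.
 * It cannot change the pointer's shape -- that is up to the terminal
 * emulator.  Compile with: gcc mouse.c -lncurses */
#include <ncurses.h>

int main(void)
{
    initscr();
    cbreak();
    noecho();
    keypad(stdscr, TRUE);              /* needed to receive KEY_MOUSE */
    mousemask(BUTTON1_CLICKED, NULL);  /* ask the terminal for clicks */

    mvprintw(0, 0, "Click anywhere (q to quit)");
    int ch;
    while ((ch = getch()) != 'q') {
        if (ch == KEY_MOUSE) {
            MEVENT ev;
            if (getmouse(&ev) == OK)
                mvprintw(1, 0, "Click at row %d, col %d ", ev.y, ev.x);
        }
    }
    endwin();
    return 0;
}
```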
Hope I can make the question clear.
I am working on a paint-like application where users can add various objects as well as text. Currently, the way to add text is that we show a dialog where the user can enter text, and that text is then added to the draw area.
Now we want text to be added the same way as in PowerPoint: the user clicks anywhere in the draw area, a rectangular text-entry area appears where they can enter and format text, they can move the rectangle to reposition the text, and clicking outside commits the text to the drawing area.
Since every object is added to the draw area through its paint event using the Graphics object, what is the best way to add text with the interface I described above?
Any suggestions would be appreciated.
Your best option is to place a TextBox as a child control; that will allow the user to modify the text as required. Once they finish changing the text, you remove the text box and draw the string instead. If they click the text because they want to change it, you put the text box back again so they can edit it.
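A hedged sketch of that approach (control and member names such as drawArea and enteredTexts are invented for illustration, assumed to be members of the form):

```csharp
// Hypothetical sketch: a TextBox is parented to the draw area on click;
// when it loses focus the text is committed and painted as a string.
private readonly List<KeyValuePair<Point, string>> enteredTexts =
    new List<KeyValuePair<Point, string>>();
private TextBox editor;

private void drawArea_MouseClick(object sender, MouseEventArgs e)
{
    editor = new TextBox { Location = e.Location, Width = 150 };
    editor.LostFocus += delegate
    {
        // Commit the text, then swap the live control for a drawn string.
        if (editor.Text.Length > 0)
            enteredTexts.Add(
                new KeyValuePair<Point, string>(editor.Location, editor.Text));
        drawArea.Controls.Remove(editor);
        drawArea.Invalidate();
    };
    drawArea.Controls.Add(editor);
    editor.Focus();
}

private void drawArea_Paint(object sender, PaintEventArgs e)
{
    foreach (var item in enteredTexts)
        e.Graphics.DrawString(item.Value, Font, Brushes.Black, item.Key);
}
```

Re-editing on click is then a matter of hit-testing the click against the stored locations and putting the TextBox back with the stored string.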
I am trying to implement a text box where a user can type and use arrow keys, backspace, delete, etc. I would like to know what is in this text box without the user needing to submit anything. I suppose I could catch keypress events, find a way to display a cursor, and basically build a mini text editor by hand--but maybe that would be reinventing the wheel?
What I am after is rather Scrabble-like. You have several letters in the top part of a window and a text box at the bottom. Each time you type a letter it disappears from the top pane, so you know when you've used them all up. I want to be able to edit that text with the arrow keys, because rather than the 7 letters Scrabble would give me, I hope to be doing this with paragraphs.
I have the window displaying, and the source file processed and displayed as a list of allowable letters... I just want to update the list of allowable letters while the user types in their sentence. Can Xlib do this? Is there something else that might be more suitable? Thanks!
Can Xlib do this? Why yes, Xlib can do a lot of things. What you describe seems simple enough using X's event processing and drawing functions.
Xlib is pretty crufty, though, and IMO you should only use it if you need closeness to the X protocol. (Even then there are newer replacements like XCB. But I digress.)
You might find it easier to work with a modern toolkit, like GTK+ or Qt.
For example, this might be expressed as a GtkEntry with a "key-press-event" handler.
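As a hedged illustration (GTK 3, one option among several): a GtkEntry gives you the cursor, arrow keys, backspace, and deletion for free. This sketch uses the "changed" signal rather than "key-press-event", since "changed" also fires on paste and delete, which is what you want for keeping the letter pane in sync:

```c
/* Sketch: a GtkEntry whose contents can be inspected on every change.
 * Compile with: gcc entry.c `pkg-config --cflags --libs gtk+-3.0` */
#include <gtk/gtk.h>

static void on_changed(GtkEditable *editable, gpointer user_data)
{
    /* Called on every edit; update the allowable-letters pane here. */
    const char *text = gtk_entry_get_text(GTK_ENTRY(editable));
    g_print("current text: %s\n", text);
}

int main(int argc, char *argv[])
{
    gtk_init(&argc, &argv);

    GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    GtkWidget *entry  = gtk_entry_new();
    gtk_container_add(GTK_CONTAINER(window), entry);

    g_signal_connect(entry,  "changed", G_CALLBACK(on_changed),   NULL);
    g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);

    gtk_widget_show_all(window);
    gtk_main();
    return 0;
}
```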