Prevent macOS generic keyboard driver from capturing usb-device - c

I'm trying to write a macOS user-level driver in C for a USB device (a pen tablet with buttons).
Currently this tablet gets recognized as a generic mouse and a generic keyboard by the system. Since the shortcuts assigned to the pen tablet's buttons are not customizable, I'd like to write my own driver for it.
What's working:
I am able to read the raw data from the device with hidapi (http://www.signal11.us/oss/hidapi/), which looks something like this:
A B C D E F G H
10 192 228 50 157 43 0 0
I figured out that when I use the pen, the values of columns B-H change according to the pen's position, pressure and clicks.
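The read loop is essentially the standard hidapi pattern; roughly, it looks like this (a simplified sketch - the VID/PID 0x256C / 0x006E are the decimal 9580 / 110 values used for matching in the Info.plist below):
/* Simplified sketch of the hidapi read loop (header/library names may differ
 * depending on how hidapi was installed; link with something like -lhidapi). */
#include <hidapi/hidapi.h>
#include <stdio.h>

int main(void) {
    if (hid_init() != 0)
        return 1;

    hid_device *dev = hid_open(0x256C, 0x006E, NULL); /* open by vendor/product id */
    if (!dev) {
        fprintf(stderr, "tablet not found or not accessible\n");
        return 1;
    }

    unsigned char buf[8]; /* one 8-byte report: columns A-H above */
    for (;;) {
        int n = hid_read(dev, buf, sizeof buf); /* blocking read of the next HID report */
        if (n < 0)
            break;
        for (int i = 0; i < n; i++)
            printf("%3u ", buf[i]);
        printf("\n");
    }

    hid_close(dev);
    hid_exit();
    return 0;
}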
My problem:
However, I cannot figure out how to access the buttons of the device. Each time I press one of them, its hardcoded key combination gets triggered. Since the values of column A never change, I assume that the device is still captured as a generic keyboard by the system, so this column never shows the currently pressed button - instead, the button's key combination fires.
Each time I press one of these buttons, they all trigger a held ALT/Option + Shift; additionally, some of them trigger a character, and one of them triggers volume up.
So my approach was to use a codeless kext to prevent the system from capturing the device. But this doesn't work either - the device still gets captured by the system as a generic keyboard.
I disabled SIP (csrutil) and, with my kext located in /Library/Extensions, kextload gives me a success message saying that the kext is loaded:
Warnings:
Personality CFBundleIdentifier differs from containing kext's (not necessarily a mistake, but rarely done):
Tablet
Code Signing Failure: code signature is invalid
Warnings:
Personality CFBundleIdentifier differs from containing kext's (not necessarily a mistake, but rarely done):
Tablet
/Library/Extensions/foobartablet.kext appears to be loadable (not including linkage for on-disk libraries).
kext-dev-mode allowing invalid signature -67050 0xFFFFFFFFFFFEFA16 for kext "/Library/Extensions/foobartablet.kext"
kext signature failure override allowing invalid signature -67050 0xFFFFFFFFFFFEFA16 for kext "/Library/Extensions/foobartablet.kext"
Loading /Library/Extensions/foobartablet.kext.
Here's my Info.plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>BuildMachineOSBuild</key>
    <string>13C64</string>
    <key>CFBundleDevelopmentRegion</key>
    <string>English</string>
    <key>CFBundleGetInfoString</key>
    <string>1.0 Copyright © Adis Durakovic</string>
    <key>CFBundleIdentifier</key>
    <string>com.adisdurakovic.huitablet</string>
    <key>CFBundleInfoDictionaryVersion</key>
    <string>6.0</string>
    <key>CFBundlePackageType</key>
    <string>KEXT</string>
    <key>CFBundleShortVersionString</key>
    <string>Huion Tablet 1.0</string>
    <key>CFBundleSignature</key>
    <string>????</string>
    <key>CFBundleVersion</key>
    <string>1.0</string>
    <key>DTCompiler</key>
    <string>com.apple.compilers.llvm.clang.1_0</string>
    <key>DTPlatformBuild</key>
    <string>5A2053</string>
    <key>DTPlatformVersion</key>
    <string>GM</string>
    <key>DTSDKBuild</key>
    <string>13A595</string>
    <key>DTSDKName</key>
    <string>macosx10.9</string>
    <key>DTXcode</key>
    <string>0501</string>
    <key>DTXcodeBuild</key>
    <string>5A2053</string>
    <key>IOKitPersonalities</key>
    <dict>
        <key>Tablet</key>
        <dict>
            <key>CFBundleIdentifier</key>
            <string>com.apple.kpi.iokit</string>
            <key>IOClass</key>
            <string>IOService</string>
            <key>IOProviderClass</key>
            <string>IOUSBDevice</string>
            <key>idVendor</key>
            <string>9580</string>
            <key>idProduct</key>
            <string>110</string>
            <key>bcdDevice</key>
            <string>12288</string>
            <key>IOProbeScore</key>
            <integer>200000</integer>
        </dict>
        <key>TabletNew</key>
        <dict>
            <key>CFBundleIdentifier</key>
            <string>com.apple.kpi.iokit</string>
            <key>IOClass</key>
            <string>IOService</string>
            <key>IOProviderClass</key>
            <string>IOUSBHostDevice</string>
            <key>idVendor</key>
            <string>9580</string>
            <key>idProduct</key>
            <string>110</string>
            <key>bcdDevice</key>
            <string>12288</string>
            <key>IOProbeScore</key>
            <integer>300000</integer>
        </dict>
    </dict>
    <key>OSBundleLibraries</key>
    <dict/>
</dict>
</plist>
And here's also my ioreg -i -w 0 -l -n "PenTablet" output which I used for device matching:
https://adisdurakovic.com/pentablet.txt
What am I doing wrong here?

Since you mention SIP, I assume you're testing this on 10.11 or newer. Your IOProviderClass is set to IOUSBDevice - this will only work for 10.10 and older. 10.11 introduced a completely new USB stack; the new class name is IOUSBHostDevice. I know that's not what IORegistryExplorer/ioreg say: there's a translation going on somewhere to keep old userspace apps ticking over. In the kernel, drivers matching IOUSBHostDevice will take precedence over those matching IOUSBDevice. If you want to support both, you can just add an extra personality to your codeless kext. For kexts with code, you'll need to create two versions of your kext if the legacy support doesn't work in your case.
Another thing that could affect it is probe score, although in theory your idVendor + idProduct + bcdDevice rule should already have a very high score.
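One way to verify which driver actually wins after loading your kext is to dump just the device's subtree and look at what is attached as a client of the device node, for example (assuming the node is still named "PenTablet" as in your ioreg output):
ioreg -r -l -w 0 -n "PenTablet"
If Apple's HID drivers still show up as clients under the device, your personality lost the match (or never matched); if only your Tablet/TabletNew personality is attached, the capture has been prevented.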

Related

Broadcast an Intent from Historical Broadcast

The story behind it: I'm trying to control an Android box (with a proprietary launcher) that also manages TV channels. To enter the channel section it is not enough to type a number; a specific key must be pressed. I want to find a way to replicate that key press and then use the command from Home Assistant. I could try a Bluetooth sniffer, but only if my current attempt fails.
I ran this adb command after pressing the specific key for TV channels:
adb.exe shell dumpsys activity broadcasts history
And the last broadcast in history is this (timvision is the name of the box):
Historical Broadcast foreground #0:
BroadcastRecord{560a0f4 u0 android.intent.action.GLOBAL_BUTTON} to user 0
Intent { act=android.intent.action.GLOBAL_BUTTON flg=0x10000010 (has extras) }
targetComp: {timvision.launcher/timvision.launcher.TimVisionKeyReceiver}
extras: Bundle[{android.intent.extra.KEY_EVENT=KeyEvent { action=ACTION_UP, keyCode=KEYCODE_LAST_CHANNEL, scanCode=377, metaState=0, flags=0x8, repeatCount=0, eventTime=10092564, downTime=10092515, deviceId=27, source=0x301, displayId=-1 }}]
caller=android 3514:system/1000 pid=3514 uid=1000
enqueueClockTime=2022-04-16 14:43:54.577 dispatchClockTime=2022-04-16 14:43:54.578
dispatchTime=-3s691ms (+1ms since enq) finishTime=-3s502ms (+189ms since disp)
resultTo=null resultCode=0 resultData=null
nextReceiver=1 receiver=null
Deliver +189ms #0: (manifest)
priority=0 preferredOrder=0 match=0x0 specificIndex=-1 isDefault=false
ActivityInfo:
name=timvision.launcher.TimVisionKeyReceiver
packageName=timvision.launcher
enabled=true exported=true directBootAware=false
resizeMode=RESIZE_MODE_RESIZEABLE
Is it possible to replicate this broadcast? I tried this (with the extra key values too), but it seems it is not allowed:
adb.exe shell am broadcast -a android.intent.action.GLOBAL_BUTTON -n timvision.launcher/timvision.launcher.TimVisionKeyReceiver
Error:
java.lang.SecurityException: Permission Denial: not allowed to send broadcast android.intent.action.GLOBAL_BUTTON from pid=32487, uid=2000
Alternatives or ideas are welcome.
Thanks
This is not the solution to the question as asked, but it is the solution to my problem, so I'm posting it.
The key to enter the TV channels section is listed in the official Android documentation: it is KEYCODE_LAST_CHANNEL, with code 229.
In Home Assistant, the service to call is:
service: androidtv.adb_command
data:
  command: input keyevent 229
target:
  entity_id: media_player.your_android_tv_entity
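For a quick test outside Home Assistant, the same key event can also be sent directly over adb (assuming the box accepts adb shell input):
adb.exe shell input keyevent 229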

SMTPSenderRefused when sending mail from GAE dev_appserver on gmail

Here are my email related dev_appserver options:
--smtp_host=smtp.gmail.com --smtp_port=25 --smtp_user=me@mydomain.com --smtp_password="password"
Now, this still doesn't work, and every time Google releases a new dev_appserver I have to edit api/mail_stub.py to get things to work locally, as per this S/O answer.
However, even this workaround has now stopped working. I get the following exception:
SMTPSenderRefused: (555, '5.5.2 Syntax error. mw9sm14633203wib.0 - gsmtp', <email.header.Header instance at 0x10c9c9248>)
Does anyone smarter than me know how to fix it?
UPDATE
I was able to get email to send on dev_appserver by using email addresses (e.g. for sender and recipient) in their 'plain' format of a simple string (name@domain.com) rather than the angle bracket style (Name <name@domain.com>). This is not a problem in production: recipient and sender email addresses can use angle brackets in the mail.send_mail call. I raised a ticket about this divergent behaviour between dev_appserver and production: https://code.google.com/p/googleappengine/issues/detail?id=10211&thanks=10211&ts=1383140754
Looks like it's because the 'sender' is now stored as an "email.header.Header" instance in the dev server instead of a string (since SDK 1.8.3, I think).
From my testing, when a 'From' string like "Name <name@domain.com>" is passed into smtplib.SMTP.sendmail, it parses the string to find the part within angle brackets, if any, to use as the SMTP sender, giving "<name@domain.com>". However, if this parameter is an "email.header.Header", it just converts it to a string and uses it without further parsing, giving "<Name <name@domain.com>>", thus causing the problem we're seeing.
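To illustrate the difference with just the stdlib pieces involved (a small sketch for the Python 2.x SDK era, separate from the patch below):
from email.header import Header
from email.utils import parseaddr

addr = 'Name <name@domain.com>'
# A plain string can be re-parsed, so smtplib can pull out the bare address:
print parseaddr(addr)    # ('Name', 'name@domain.com')
# A Header instance only gets stringified; smtplib never re-parses it and ends up
# wrapping the whole thing in another pair of angle brackets for MAIL FROM:
print str(Header(addr))  # 'Name <name@domain.com>'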
Here's the patch I just posted on the issue tracker to google/appengine/api/mail_stub.py to convert this parameter back to a string (works for me):
--- google/appengine/api/mail_stub-orig.py 2014-12-12 20:04:53.612070031 +0000
+++ google/appengine/api/mail_stub.py 2014-12-12 20:05:07.532294605 +0000
@@ -215,7 +215,7 @@
tos = [mime_message[to] for to in ['To', 'Cc', 'Bcc'] if mime_message[to]]
- smtp.sendmail(mime_message['From'], tos, mime_message.as_string())
+ smtp.sendmail(str(mime_message['From']), tos, mime_message.as_string())
finally:
smtp.quit()
Another alternative is to patch the SMTP server that you use for testing the app engine mail functionality in your dev environment (instead of patching mail_stub.py).
For example, I'm using subethasmtp Wiser and was able to work around this issue by patching org.subethamail.smtp.util.EmailUtils.extractEmailAddress to accept nested angle brackets (details posted here).

Solr 4: disable compression on stored fields: how to actually configure custom codec?

The short question is :
I want to disable stored field compression on Solr 4.3.0 index. After reading :
http://blog.jpountz.net/post/35667727458/stored-fields-compression-in-lucene-4-1
http://wiki.apache.org/solr/SimpleTextCodecExample
http://www.opensourceconnections.com/2013/06/05/build-your-own-lucene-codec/
I've decided to follow the path described there and make my own codec. I'm pretty sure I've followed all the steps; however, when I actually try to use my codec (affectionately named "UncompressedStorageCodec"), I get the following error in the Solr log:
java.lang.IllegalArgumentException: A SPI class of type org.apache.lucene.codecs.PostingsFormat with name 'UncompressedStorageCodec' does not exist. You need to add the corresponding JAR file supporting this SPI to your classpath.
The current classpath supports the following names: [Pulsing41, SimpleText, Memory, BloomFilter, Direct, Lucene40, Lucene41]
at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:109)
From the output I gather that Solr is not picking up the jar with my custom codec, and I don't understand why.
Here are all the horrific details:
I've created a class like this:
public class UncompressedStorageCodec extends FilterCodec {
    private final StoredFieldsFormat fieldsFormat = new Lucene40StoredFieldsFormat();

    protected UncompressedStorageCodec() {
        super("UncompressedStorageCodec", new Lucene42Codec());
    }

    @Override
    public StoredFieldsFormat storedFieldsFormat() {
        return fieldsFormat;
    }
}
in package: "fr.company.project.solr.transformers.utils"
The fully qualified name of "FilterCodec" is "org.apache.lucene.codecs.FilterCodec".
I've created a basic jar file out of this (exported it as jar from Eclipse).
The Solr installation I'm using to test this is the basic Solr 4.3.0 unzipped, started via its embedded Jetty server and using the example core.
I've placed my jar with the codec in [solrDir]\dist
In:
[solrDir]\example\solr\myCore\conf\solrconfig.xml
I've added the line:
<lib dir="../../../dist/" regex="myJarWithCodec-1.10.1.jar" />
Then in the schema.xml file, I've declared some fieldTypes that should use this codec like so:
<fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true" postingsFormat="UncompressedStorageCodec"/>
<fieldType name="string_lowercase" class="solr.TextField" positionIncrementGap="100" omitNorms="true" postingsFormat="UncompressedStorageCodec">
<!--...-->
</fieldType>
Now, if I use the DataImportHandler component to import some data into Solr, at commit time it tells me:
java.lang.IllegalArgumentException: A SPI class of type org.apache.lucene.codecs.PostingsFormat with name 'UncompressedStorageCodec' does not exist. You need to add the corresponding JAR file supporting this SPI to your classpath.
The current classpath supports the following names: [Pulsing41, SimpleText, Memory, BloomFilter, Direct, Lucene40, Lucene41]
at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:109)
What I find strange is that the above-mentioned codec jar also contains some Transformers for the DataImportHandler component, and those are picked up fine. Also, other jars placed in the dist folder (and declared in the same way in solrconfig.xml), like the JDBC driver, are picked up fine. I'm guessing that the codec goes through this SPI mechanism, which loads things differently, and that's where something is missing...
I've also tried placing the codec jar in:
[solrDir]\example\solr-webapp\webapp\WEB-INF\lib\
as well as inside the WEB-INF\lib folder of the solr.war file, which is found in:
[solrDir]\example\webapps\
but I'm still getting the same error.
So basically, my question is, what's missing so that my codec jar is picked up by Solr?
Thanks
I'm going to answer this question myself, since it has sort of become moot due to some benchmarks I've made. Long story short, I had arrived at the (wrong) conclusion that for really large stored fields, Solr 3.x and 4.0 (without field compression) is faster than Solr 4.1 and above (with field compression). However, that was mostly due to errors in my benchmarks. After repeating them, I've obtained results showing that going from non-compressed to compressed fields, even for very large stored fields, makes indexing only 0% to 15% slower, which is really not bad at all, considering that queries on the compressed-field indexes are afterwards 10-20% faster (the document-fetching part).
Also, here are some remarks on how to speed up indexing:
Use the DataImportHandler plugin. It bypasses the Solr REST (HTTP-based) API and writes directly to the Lucene index.
Check out said plugin's sources to see how it accomplishes this, and write your own plugin if the DataImportHandler doesn't meet your needs.
If for whatever reason you want to stick to the Solr REST API, use ConcurrentUpdateSolrServer and play around with the queue size and number of threads parameters. It will normally be a lot faster (up to 200% in my case) than the basic HttpSolrServer.
Don't forget to enable the javabin data serialization like this:
ConcurrentUpdateSolrServer solrServer = new ConcurrentUpdateSolrServer("http://some.solr.host:8983/solr", 100, 4);
solrServer.setRequestWriter(new BinaryRequestWriter());
I'm explicitly showing the code because I believe there might be a small bug here:
If you look at the ConcurrentUpdateSolrServer constructor, you'll see that by default it already sets the request writer to binary:
// the ConcurrentUpdateSolrServer initializes HttpSolrServer objects using this constructor:
public HttpSolrServer(String baseURL, HttpClient client) {
    this(baseURL, client, new BinaryResponseParser());
}
However, after debugging I've noticed that if you don't explicitly call the setRequestWriter method with the binary writer argument, it will still use the XML request writer.
Going from XML to binary serialization reduces the size of my documents by about a factor of 3 as they are sent to the server. This makes my indexing in this case about 150-200% faster.
I recently tried, and succeeded, in getting something very similar to work. The only difference is that I want to enable the best compression instead of no compression, and Solr defaults to the fastest compression. I also got the "SPI class [...] does not exist" error at some point, and here is what I have found out from various articles, including the ones you have linked to.
Lucene uses SPI to find the codec classes to load. Lucene requires the list of codec classes to be declared in a file named "org.apache.lucene.codecs.Codec", and that file must be on the classpath. To get Solr to load the file: when you create your JAR file "myJarWithCodec-1.10.1.jar", make sure that it contains a file at "META-INF/services/org.apache.lucene.codecs.Codec". The file should have one full class name per line, like this:
org.apache.lucene.codecs.lucene3x.Lucene3xCodec
org.apache.lucene.codecs.lucene40.Lucene40Codec
org.apache.lucene.codecs.lucene41.Lucene41Codec
org.apache.lucene.codecs.lucene42.Lucene42Codec
fr.company.project.solr.transformers.utils.UncompressedStorageCodec
And in solrconfig.xml, replace:
<codecFactory class="solr.SchemaCodecFactory" />
with:
<codecFactory class="fr.company.project.solr.transformers.utils.UncompressedStorageCodec" />
You might also need to remove postingsFormat="UncompressedStorageCodec" from schema.xml if Solr complains. I think this particular parameter is for specifying the postings format, not the codec. Hope it helps.
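As a quick way to sanity-check the packaging outside Solr: with your jar and the Lucene jars on the classpath, the codec should be resolvable by name through the same NamedSPILoader mechanism that produced the error (here for Codec rather than PostingsFormat). A rough sketch:
import org.apache.lucene.codecs.Codec;

public class CodecCheck {
    public static void main(String[] args) {
        // Throws IllegalArgumentException ("A SPI class of type ... does not exist")
        // if the META-INF/services entry is missing from the jar or the jar is not
        // on the classpath.
        Codec codec = Codec.forName("UncompressedStorageCodec");
        System.out.println("Loaded codec: " + codec.getName());
    }
}
Note that SPI instantiates the codec reflectively, so the no-arg constructor most likely needs to be public rather than protected for the lookup to succeed.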

How to get voice in raw format by using mic in linux

I'm writing a speech recognition program with CMU Sphinx. It needs a .raw audio file to work with. How can I get voice from my mic in raw format? I have googled for that; they say I could read from /dev/dsp, but I can't find that file/device. I'm on Arch Linux with ALSA, Linux version 3.2.9-1-pae.
madper@myhost /dev % ls
agpgart ptmx tty23 tty58 vcs28 vcs62 vcsa39
autofs pts/ tty24 tty59 vcs29 vcs63 vcsa4
block/ random tty25 tty6 vcs3 vcs7 vcsa40
bsg/ rfkill tty26 tty60 vcs30 vcs8 vcsa41
btrfs-control rtc# tty27 tty61 vcs31 vcs9 vcsa42
bus/ rtc0 tty28 tty62 vcs32 vcsa vcsa43
char/ sda tty29 tty63 vcs33 vcsa1 vcsa44
console sda1 tty3 tty7 vcs34 vcsa10 vcsa45
core# sda2 tty30 tty8 vcs35 vcsa11 vcsa46
cpu/ sda3 tty31 tty9 vcs36 vcsa12 vcsa47
cpu_dma_latency sda4 tty32 ttyS0 vcs37 vcsa13 vcsa48
disk/ sda5 tty33 ttyS1 vcs38 vcsa14 vcsa49
dri/ sda6 tty34 ttyS2 vcs39 vcsa15 vcsa5
fb0 sda7 tty35 ttyS3 vcs4 vcsa16 vcsa50
fd# sda8 tty36 uinput vcs40 vcsa17 vcsa51
freefall shm/ tty37 urandom vcs41 vcsa18 vcsa52
full snapshot tty38 v4l/ vcs42 vcsa19 vcsa53
fuse snd/ tty39 vcs vcs43 vcsa2 vcsa54
hidraw0 stderr# tty4 vcs1 vcs44 vcsa20 vcsa55
hidraw1 stdin# tty40 vcs10 vcs45 vcsa21 vcsa56
hpet stdout# tty41 vcs11 vcs46 vcsa22 vcsa57
initctl| tty tty42 vcs12 vcs47 vcsa23 vcsa58
input/ tty0 tty43 vcs13 vcs48 vcsa24 vcsa59
kmsg tty1 tty44 vcs14 vcs49 vcsa25 vcsa6
log= tty10 tty45 vcs15 vcs5 vcsa26 vcsa60
loop-control tty11 tty46 vcs16 vcs50 vcsa27 vcsa61
mapper/ tty12 tty47 vcs17 vcs51 vcsa28 vcsa62
mcelog tty13 tty48 vcs18 vcs52 vcsa29 vcsa63
media0 tty14 tty49 vcs19 vcs53 vcsa3 vcsa7
mei tty15 tty5 vcs2 vcs54 vcsa30 vcsa8
mem tty16 tty50 vcs20 vcs55 vcsa31 vcsa9
net/ tty17 tty51 vcs21 vcs56 vcsa32 vga_arbiter
network_latency tty18 tty52 vcs22 vcs57 vcsa33 video0
network_throughput tty19 tty53 vcs23 vcs58 vcsa34 watchdog
null tty2 tty54 vcs24 vcs59 vcsa35 zero
port tty20 tty55 vcs25 vcs6 vcsa36
ppp tty21 tty56 vcs26 vcs60 vcsa37
psaux tty22 tty57 vcs27 vcs61 vcsa38
Is there another way to get voice? Use GStreamer? Or can I use Google's API to get the text by uploading an audio file? Any other advice is also welcome.
Thank you
Here are some useful links that will teach you how to capture voice data using ALSA -
Resource 1
Linux Journal
Here is a link that will give you some insights about ALSA and its configuration.
This is the official ALSA API Reference.
This may be out of context but here is a list of recommendations that you should keep in mind while doing audio programming.
If you would like some alternatives to ALSA, then I would suggest taking a look at PortAudio.
/dev/dsp is OSS, which is the audio subsystem used by older versions of Linux. Use either the GStreamer (preferable) or ALSA (acceptable) APIs to record audio.
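If you go the ALSA route, a minimal capture sketch looks roughly like this (an illustration, not production code; it assumes libasound and the "default" capture device, and writes the 16 kHz / 16-bit / mono raw stream that pocketsphinx models typically expect - adjust to your model):
/* Records ~5 seconds of raw mono 16-bit 16 kHz audio to test.raw.
 * Build with: gcc rec.c -o rec -lasound */
#include <alsa/asoundlib.h>
#include <stdio.h>

int main(void) {
    snd_pcm_t *pcm;
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0) < 0)
        return 1;
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE, SND_PCM_ACCESS_RW_INTERLEAVED,
                           1 /* channels */, 16000 /* rate */, 1, 500000 /* 0.5 s latency */) < 0)
        return 1;

    FILE *out = fopen("test.raw", "wb");
    if (!out)
        return 1;

    short buf[1600];                       /* 100 ms of mono audio at 16 kHz */
    for (int i = 0; i < 50; i++) {         /* ~5 seconds total */
        snd_pcm_sframes_t n = snd_pcm_readi(pcm, buf, 1600);
        if (n < 0)
            break;                         /* overrun or other error: keep the sketch simple */
        fwrite(buf, sizeof(short), (size_t)n, out);
    }

    fclose(out);
    snd_pcm_close(pcm);
    return 0;
}
For a quick test without writing any code, ALSA's arecord can produce the same kind of file: arecord -f S16_LE -r 16000 -c 1 -t raw -d 5 test.raw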
Check the /dev/snd/ folder. Find pcmC0D0c - that is raw PCM audio, similar to the OSS /dev/dsp. Hope it helps.

Apple iWork Mime Types

I was wondering what the MIME type for iWork's Pages is, and also what the MIME types are for the rest of the software in the iWork suite? I looked around online and didn't find it anywhere.
I recently needed this for work and ended up just uploading some files and querying the mimetypes. I found the following:
keynote: application/x-iwork-keynote-sffkey
pages: application/x-iwork-pages-sffpages
numbers: application/x-iwork-numbers-sffnumbers
2021 Update
Please note that this answer is now outdated and the following content types have been approved by IANA:
application/vnd.apple.pages
application/vnd.apple.keynote
application/vnd.apple.numbers
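If you serve iWork files yourself, you may still need to map the extensions to these types in your web server; for example in Apache (a sketch - the extensions shown are the usual iWork ones, adjust to your setup):
AddType application/vnd.apple.pages    .pages
AddType application/vnd.apple.keynote  .key
AddType application/vnd.apple.numbers  .numbers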
Looks like Apple doesn't much care, since installing iWork does not add any mime-type information to any of the system's mime-type repositories (in /etc/cups and /etc/apache2), "Get Info" on an iWork file shows no mime-type, etc. The only hint I've found is in Pages' Info.plist (a copy's online here), which mentions:
<key>public.filename-extension</key>
<array>
    <string>pages</string>
</array>
<key>public.mime-type</key>
<array>
    <string>application/x-iwork-pages-sffpages</string>
</array>
and a similar one for filename-extension "template", with "-sfftemplate" as the suffix instead of "-sffpages".
application/vnd.apple.keynote
application/vnd.apple.pages
application/vnd.apple.numbers
I just got these approved with IANA. You will find them in the list at the link below:
https://www.iana.org/assignments/media-types/media-types.xhtml
You can use mime-db (https://github.com/jshttp/mime-db) to validate them using JavaScript.
This URL shows some other types in case new readers need it:
Apache Jira Issue TIKA-588
application/vnd.apple.keynote, application/vnd.apple.pages, application/vnd.apple.numbers
Actually, those files are all masked zip files, so some systems might report their mimetype simply as application/zip.
