I am getting started with the V4L2 framework on Ubuntu 10.04.
Currently I am using a webcam to do some tests. I am following this documentation to get started, and the installation worked fine. I downloaded and compiled the example application. The problem is the video output. I call the executables using:
# modprobe -r pwc
# modprobe -v pwc fps=15 compression=3 mbufs=4 fbufs=4 size=vga
# ./capturer_mmap -D /dev/video0 -w 640*480 -p 0 | ./viewer -w 640*480 -p 0
which gives this output on the terminal:
window size 640*480
Video bytespreline = 1280
Display:
Image byte order = LSBFirst
Bitmap unit = 32
Bitmap bit order = LSBFirst
Bitmap pad = 32
Window:
Depth = 24
Red mask = 0x00ff0000
Green mask = 0x0000ff00
Blue mask = 0x000000ff
Bits per R/G/B = 8
Image byte order = LSBFirst
Bitmap unit = 32
Bitmap bit order = LSBFirst
Bitmap pad = 32
Depth = 24
Red mask = 0x00ff0000
Green mask = 0x0000ff00
Blue mask = 0x000000ff
Bits per pixel = 32
Bytes per line = 2560
IsShared = True
XIO: fatal IO error 11 (Resource temporarily unavailable) on X server ":0.0"
after 431 requests (19 known processed) with 0 events remaining.
I have no idea how to fix this. I believe the problem is in the C code, because I can use the webcam fine with the Cheese webcam application. Any help is very much appreciated. Thanks a lot!
It looks like you are displaying the image in a completely wrong format.
When working with V4L2, you should definitely check out libv4l (packaged in Debian, so also available in Ubuntu). V4L2 allows a device to output its frames in any of a very large number of video formats, some of which are compressed (e.g. using JPEG).
Core V4L2 does not provide any means to convert the frames into a given format your application supports, so in theory your application must support all possible formats.
To avoid code duplication (each V4L2-capable application faces the same problem!), libv4l was created: it allows low-level access to the device, but at the same time guarantees that the frames can be accessed in a few standard formats.
For example, if the device only supports JPEG output and your app requests RGB32 frames, libv4l will transparently convert for you.
You can even use libv4l with some LD_PRELOAD tricks to make it work with applications that were compiled without libv4l support (just to check whether my suggestion makes sense).
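To make that concrete, here is a minimal, hedged sketch of the libv4l route: it opens the device through libv4l2 and asks for RGB24, one of the formats libv4l guarantees it can deliver even when the camera natively produces something else. The device path and frame size are assumptions; link with -lv4l2.

/* minimal libv4l2 capture sketch: request RGB24 and let libv4l convert if needed */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libv4l2.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = v4l2_open("/dev/video0", O_RDWR);      /* assumed device node */
    if (fd < 0) { perror("v4l2_open"); return 1; }

    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width       = 640;
    fmt.fmt.pix.height      = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB24;   /* libv4l emulates this if the hardware cannot */
    fmt.fmt.pix.field       = V4L2_FIELD_ANY;
    if (v4l2_ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { perror("VIDIOC_S_FMT"); return 1; }

    unsigned char *frame = malloc(fmt.fmt.pix.sizeimage);
    if (frame && v4l2_read(fd, frame, fmt.fmt.pix.sizeimage) < 0)   /* libv4l can also emulate read() on mmap-only devices */
        perror("v4l2_read");

    free(frame);
    v4l2_close(fd);
    return 0;
}

The LD_PRELOAD trick mentioned above uses the wrapper library shipped with libv4l, roughly LD_PRELOAD=/usr/lib/libv4l/v4l2convert.so ./capturer_mmap ... (the exact path varies by distribution), which lets you test the idea without recompiling anything.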
How do I decrease the losses by increasing power level?
Here is the code I am using:
https://github.com/maibewakoofhu/Unet
I am changing the power level using:
phy[1].powerLevel = -20.dB;
At noise level 68 dB and power level -20 dB, all DatagramReqs are sent successfully.
At noise level 70 dB and power level -20 dB, the DatagramReq fails.
Even after increasing the power level to as high as 125 dB, the DatagramReq still fails.
I created a simpler version of your simulation to test the SNR and packet-loss relationship:
import org.arl.fjage.RealTimePlatform
import org.arl.unet.sim.channels.BasicAcousticChannel
platform = RealTimePlatform
channel = [
  model:            BasicAcousticChannel,
  carrierFrequency: 25.kHz,
  bandwidth:        4096.Hz,
  spreading:        2,
  temperature:      25.C,
  salinity:         35.ppt,
  noiseLevel:       73.dB,
  waterDepth:       1120.m
]
simulate {
  node 'C', address: 31, location: [180.m, 0, -1000.m], web: 8101
  node 'A', address: 21, location: [0.m, 0.m, 0.m], web: 8102
}
The web: entries allow us to interact with each of the nodes to explore what is happening. I connect to each node (http://localhost:8101/shell.html and http://localhost:8102/shell.html) and subscribe phy to see all physical-layer events.
Now, from node A, I try broadcasting frames to see (at various power levels) if node C receives them:
> plvl -20
OK
> phy << new TxFrameReq()
AGREE
On node C, you'll see receptions, if successful:
phy >> RxFrameStartNtf:INFORM[type:CONTROL rxTime:3380134843]
phy >> RxFrameNtf:INFORM[type:CONTROL from:21 rxTime:3380134843]
or bad frames if not:
phy >> RxFrameStartNtf:INFORM[type:CONTROL rxTime:3389688843]
phy >> BadFrameNtf:INFORM[type:CONTROL rxTime:3389688843]
Observations:
- At plvl -20 dB, almost all frames fail.
- At plvl -10 dB, almost all frames are successful.
- At plvl -16 dB, I get a frame loss of about 19%.
The transition from all frames failing to all frames succeeding is expected to be quite sharp, as is typical in reality for stationary noise, since FEC performance tends to be quite non-linear. So you can expect big differences in frame-loss rate around the transition region (in this example, at around -16 dB).
Do also note that plvl 125 dB isn't valid (the range of plvl is given by phy.minPowerLevel to phy.maxPowerLevel, -96 dB to 0 dB by default). So setting it would not have worked:
> plvl 125
phy[1]: WARNING: Parameter powerLevel set to 0.0
phy[2]: WARNING: Parameter powerLevel set to 0.0
phy[3]: WARNING: Parameter powerLevel set to 0.0
phy: WARNING: Parameter signalPowerLevel set to 0.0
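If you want to check the valid range on your own setup, the two parameters mentioned above can be read directly from a node's shell (the values below are just the defaults quoted above, shown for illustration):
> phy.minPowerLevel
-96.0
> phy.maxPowerLevel
0.0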
I am having problems configuring ALSA on my RHEL 7.5 machine.
Part of my solution is to attempt to change settings in /etc/asound.conf. I have tried numerous permutations but I continue to hear "jitter" in my sounds (.raw files).
I am using aplay --dump-hw-params to get the parameters for my sound hardware.
Using this command:
aplay --dump-hw-params Front_Center.wav
These are the results I get:
Playing WAVE 'Front_Center.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Mono
HW Params of device "default":
--------------------
ACCESS: MMAP_INTERLEAVED MMAP_NONINTERLEAVED MMAP_COMPLEX RW_INTERLEAVED RW_NONINTERLEAVED
FORMAT: S8 U8 S16_LE S16_BE U16_LE U16_BE S24_LE S24_BE U24_LE U24_BE S32_LE S32_BE U32_LE U32_BE FLOAT_LE FLOAT_BE FLOAT64_LE FLOAT64_BE MU_LAW A_LAW IMA_ADPCM S24_3LE S24_3BE U24_3LE U24_3BE S20_3LE S20_3BE U20_3LE U20_3BE S18_3LE S18_3BE U18_3LE U18_3BE
SUBFORMAT: STD
SAMPLE_BITS: [4 64]
FRAME_BITS: [4 640000]
CHANNELS: [1 10000]
RATE: [4000 4294967295)
PERIOD_TIME: (11609 11610)
PERIOD_SIZE: (46 49864571)
PERIOD_BYTES: (23 4294967295)
PERIODS: (0 17344165)
BUFFER_TIME: [1 4294967295]
BUFFER_SIZE: [92 797831566]
BUFFER_BYTES: [46 4294967295]
TICK_TIME: ALL
--------------------
I'd like to know what the values within parentheses and square brackets mean in general.
Are they ranges?
What is the difference between the use of parentheses vs. brackets?
Thanks,
Ian
They are ranges: the minimum and maximum values supported by the specific hardware device you are using. A square bracket means the interval is closed at that end (the printed bound itself is allowed), while a parenthesis means it is open at that end (the bound is exclusive).
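If it helps, the same ranges can be queried programmatically with alsa-lib; here is a minimal sketch (the device name "default" is an assumption, error handling is trimmed, link with -lasound):

#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    unsigned int rate_min, rate_max, ch_min, ch_max;
    int dir;

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);   /* full configuration space: the same ranges aplay dumps */

    snd_pcm_hw_params_get_rate_min(hw, &rate_min, &dir);
    snd_pcm_hw_params_get_rate_max(hw, &rate_max, &dir);
    snd_pcm_hw_params_get_channels_min(hw, &ch_min);
    snd_pcm_hw_params_get_channels_max(hw, &ch_max);

    printf("RATE: [%u %u]\n", rate_min, rate_max);
    printf("CHANNELS: [%u %u]\n", ch_min, ch_max);

    snd_pcm_close(pcm);
    return 0;
}

Constraining one parameter (for example, forcing a rate) and dumping again shows how the remaining ranges shrink, which can help explain why a particular /etc/asound.conf combination misbehaves.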
I am going to encrypt several fields in an existing table. Basically, the following encryption technique is going to be used:
CREATE MASTER KEY ENCRYPTION
BY PASSWORD = 'sm_long_password#'
GO
CREATE CERTIFICATE CERT_01
WITH SUBJECT = 'CERT_01'
GO
CREATE SYMMETRIC KEY SK_01
WITH ALGORITHM = AES_256 ENCRYPTION
BY CERTIFICATE CERT_01
GO
OPEN SYMMETRIC KEY SK_01 DECRYPTION
BY CERTIFICATE CERT_01
SELECT ENCRYPTBYKEY(KEY_GUID('SK_01'), 'test')
CLOSE SYMMETRIC KEY SK_01
DROP SYMMETRIC KEY SK_01
DROP CERTIFICATE CERT_01
DROP MASTER KEY
ENCRYPTBYKEY returns varbinary with a maximum size of 8,000 bytes. Knowing which table fields are going to be encrypted (for example: nvarchar(128), varchar(31), bigint), how can I determine the length of the new varbinary columns?
You can see the full specification here.
So let's calculate:
16 bytes key GUID
 4 bytes header
16 bytes IV (for AES, a 16-byte block cipher)
Plus then the size of the encrypted message:
 4 bytes magic number
 2 bytes integrity bytes length
 0 bytes integrity bytes (warning: may be wrongly placed in the table)
 2 bytes (plaintext) message length
 m bytes (plaintext) message
CBC padding bytes
The CBC padding bytes should be calculated the following way:
16 - ((m + 4 + 2 + 2) % 16)
as padding is always applied. This will result in a number of padding bytes in the range 1..16. A sneaky shortcut is to just add 16 bytes to the total, but this may mean that you're specifying up to 15 bytes that are never used.
We can shorten this to 36 + 8 + m + 16 - ((m + 8) % 16), or 60 + m - ((m + 8) % 16). Or, if you use the little trick specified above and you don't care about the wasted bytes: 76 + m, where m is the length of the message input in bytes. For example, a varchar(31) column has at most m = 31, so the worst case is 60 + 31 - ((31 + 8) % 16) = 84 bytes; an nvarchar(128) column has at most m = 256 bytes, giving 60 + 256 - ((256 + 8) % 16) = 308 bytes.
Notes:
beware that the first byte in the header contains the version number of the scheme; this answer does not and cannot specify how many bytes will be added or removed if a different internal message format or encryption scheme is used;
using integrity bytes is highly recommended in case you want to protect your DB fields against change (keeping the amount of money in an account confidential is less important than making sure the amount cannot be changed).
The example on the page assumes single byte encoding for text characters.
Based upon some tests in SQL Server 2008, the following formula seems to work. Note that @ClearText is VARCHAR:
52 + (16 * ( ((LEN(@ClearText) + 8) / 16) ) )
This is roughly compatible with the answer by Maarten Bodewes, except that my tests showed the DATALENGTH(myBinary) to always be of the form 52 + (z * 16), where z is an integer.
LEN(myVarCharString)    DATALENGTH(encryptedString)
--------------------    -----------------------------------------
 0 through  7           usually 52, but occasionally 68 or 84
 8 through 23           usually 68, but occasionally 84
24 through 39           usually 84
40 through 50           100
The "myVarCharString" was a table column defined as VARCHAR(50). The table contained 150,000 records. The mention of "occasionally" is an instance of about 1 out of 10,000 records that would get bumped into a higher bucket; very strange. For LEN() of 24 and higher, there were not enough records to get the weird anomaly.
Here is some Perl code that takes a proposed length for "myVarCharString" as input from the terminal and produces an expected size for the EncryptByKey() result. The function "int()" is equivalent to "Math.floor()".
while ($len = <>) {
    print 52 + ( 16 * int( ($len + 8) / 16 ) ), "\n";
}
You might want to use this formula to calculate a size, then add 16 to allow for the anomaly.
I'm unable to write 1514 bytes (including the L2 information) via write() to /dev/bpf. I can write smaller packets (meaning I think the basic setup is correct), but I see "Message too long" with full-length packets. This is on Solaris 11.2.
It's as though the write is being treated as the write of an IP packet.
Per the specs, there are 1500 bytes for the IP portion, 14 for the L2 headers (18 if tagged), and 4 bytes for the frame check sequence.
I've set the flag that I thought would prevent the OS from adding its own layer 2 information (yes, I also find it odd that a 1 disables it; pseudo-code below):
int hdr_complete = 1;  /* 1 = the header is already complete; don't fill in the L2 source address */
ioctl(bpf, BIOCSHDRCMPLT, &hdr_complete);
The packets are never larger than 1514 bytes (they're captured via a port span and start with the source and destination MAC addresses; I'm effectively replaying them).
I'm sure I'm missing something basic here, but I'm hitting a dead end. Any pointers would be much appreciated!
Partial Answer: This link was very helpful.
Update 3/20/2017
The code works on Mac OS X, but on Solaris it results in repeated "Interrupted system call" errors (EINTR). I'm starting to read scary things about having to implement signal handling, which I'd rather not do...
Sample code is on GitHub, based on various code I've found via Google. On most systems you have to run it with root privileges unless you've granted the "net_rawaccess" privilege to the user.
Still trying to figure out the EINTR issue. Output from truss:
27158/1: 0.0122 0.0000 write(3, 0x08081DD0, 1514) Err#4 EINTR
27158/1: \0 >E1C09B92 4159E01C694\b\0 E\005DC82E1 #\0 #06F8 xC0A81C\fC0A8
27158/1: 1C eC8EF14 Q nB0BC 4 V #FBDE8010FFFF8313\0\00101\b\n ^F3 W # C E
27158/1: d SDD G14EDEB ~ t sCFADC6 qE3C3B7 ,D9D51D VB0DFB0\b96C4B8EC1C90
27158/1: 12F9D7 &E6C2A4 Z 6 t\bFCE5EBBF9C1798 r 4EF "139F +A9 cE3957F tA7
27158/1: x KCD _0E qB9 DE5C1 #CAACFF gC398D9F787FB\n & &B389\n H\t ~EF81
27158/1: C9BCE0D7 .9A1B13 [ [DE\b [ ECBF31EC3 z19CDA0 #81 ) JC9 2C8B9B491
27158/1: u94 iA3 .84B78AE09592 ;DA ] .F8 A811EE H Q o q9B 8A4 cF1 XF5 g
27158/1: EC ^\n1BE2C1A5C2 V 7FD 094 + (B5D3 :A31B8B128D ' J 18A <897FA3 u
EDIT 7 April 2017
The EINTR problem was the result of a bug in the sample code that I placed on GitHub: the code was not associating the bpf device with the actual interface, and Solaris was throwing the EINTR as a result.
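For reference, on BSD-style bpf implementations that association is normally done with the BIOCSETIF ioctl before any read or write; a minimal sketch (the device path and the interface name "net0" are placeholders, error handling is trimmed):

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <net/bpf.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int bpf = open("/dev/bpf", O_RDWR);   /* or /dev/bpf0, /dev/bpf1, ... depending on the OS */
    if (bpf < 0) { perror("open"); return 1; }

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "net0", sizeof(ifr.ifr_name) - 1);   /* placeholder interface name */
    if (ioctl(bpf, BIOCSETIF, &ifr) < 0) { perror("BIOCSETIF"); return 1; }

    unsigned int hdr_complete = 1;        /* we supply the complete Ethernet header ourselves */
    if (ioctl(bpf, BIOCSHDRCMPLT, &hdr_complete) < 0) { perror("BIOCSHDRCMPLT"); return 1; }

    /* a write() of a complete frame (dst MAC, src MAC, EtherType, payload) would go here */

    close(bpf);
    return 0;
}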
Now I'm back to the "message too long" problem that I still haven't resolved.
I'm trying to write a Linux driver for an I2C device that seems to be slightly different from a typical device. Specifically, I need to read two bytes in a row without sending a stop bit in between, like so:
[S] [Slave Addr | 0] [A] [Reg Addr 1] [A] [Sr] [Slave Addr | 1] [Data Byte 1] [NA]
[Sr][Slave Addr | 0] [A] [Reg Addr 2] [A] [Sr] [Slave Addr | 1] [Data Byte 2] [NA] [P]
I've looked at a few ways of doing this, including i2c_transfer(), i2c_master_send() and i2c_master_recv(), but I'm not sure whether they support this. Is there any way of doing this directly with these functions that isn't horribly painful? The documentation I've found so far hasn't been entirely clear on the matter.
EDIT #1: adding the key to symbols to make it readable. Courtesy of http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/i2c/i2c-protocol
Key to symbols
==============
S (1 bit) : Start bit
P (1 bit) : Stop bit
Rd/Wr (1 bit) : Read/Write bit. Rd equals 1, Wr equals 0.
A, NA (1 bit) : Accept and reverse accept bit.
Addr (7 bits): I2C 7 bit address. Note that this can be expanded as usual to
get a 10 bit I2C address.
Comm (8 bits): Command byte, a data byte which often selects a register on
the device.
Data (8 bits): A plain data byte. Sometimes, I write DataLow, DataHigh
for 16 bit data.
Count (8 bits): A data byte containing the length of a block operation.
[..]: Data sent by I2C device, as opposed to data sent by the host adapter.
No stop bit is sent between bytes in the same read/write operation, so i2c_master_recv() is probably what you need.
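If you do want the repeated starts from your diagram explicitly, i2c_transfer() is the way to do it: all messages passed in a single call are joined with repeated starts, and the stop bit is only generated after the last one. A hedged sketch for a kernel driver, assuming a struct i2c_client *client is available (the helper name read_reg is purely illustrative):

#include <linux/errno.h>
#include <linux/i2c.h>
#include <linux/kernel.h>

/* Read one register: write the register address, repeated start, read one data byte. */
static int read_reg(struct i2c_client *client, u8 reg, u8 *val)
{
        struct i2c_msg msgs[] = {
                {
                        .addr  = client->addr,
                        .flags = 0,          /* [S] addr+W, register address */
                        .len   = 1,
                        .buf   = &reg,
                },
                {
                        .addr  = client->addr,
                        .flags = I2C_M_RD,   /* [Sr] addr+R, one data byte */
                        .len   = 1,
                        .buf   = val,
                },
        };
        int ret = i2c_transfer(client->adapter, msgs, ARRAY_SIZE(msgs));

        if (ret < 0)
                return ret;
        return ret == ARRAY_SIZE(msgs) ? 0 : -EIO;
}

To reproduce the exact sequence in the question (two register reads with only repeated starts in between and a single stop at the very end), put all four i2c_msg entries into one i2c_transfer() call instead of calling a helper like this twice.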