I would like to send a byte string to my device, which receives data over a serial port (RS-422).
I have to send my byte string as "\x0a\x23\x01...", but Python 2.7 treats \x0a as a newline character (ASCII line feed), so I can't send the rest of the bytes.
How can I send "\x0a" as a byte, not as a newline? Below you can find my code (it simply writes the byte string to one port, then reads it back and prints it from another port on the same device).
import serial
import binascii
class test_Tk():
    def __init__(self):
        self.serialPortWrite = serial.Serial('COM6', 921600, timeout=0.5)
        self.serialPortRead = serial.Serial('COM7', 921600, timeout=0.5)
        self.byte1 = "\x0a"
        self.byte2 = "\x23"
        self.byte3 = "\x01"
        self.bytestring = "\xaa\xab\xac\xad"
        self.data = self.byte1 + self.byte2 + self.byte3 + self.bytestring
        self.serialPortWrite.write(self.data)
        self.read = self.serialPortRead.readline(100)
        print binascii.hexlify(self.read)
test_Tk()
I get the output "0a", but I should have gotten "0a2301aaabacad".
Thanks!
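For what it's worth, write() does send all seven bytes here; it is readline() on the reading side that stops at the first 0x0a, because that byte is the line terminator. A minimal sketch (untested, assuming the same COM ports and baud rate) that reads a fixed number of bytes instead of a line:

import serial
import binascii

port_write = serial.Serial('COM6', 921600, timeout=0.5)
port_read = serial.Serial('COM7', 921600, timeout=0.5)

data = "\x0a\x23\x01\xaa\xab\xac\xad"
port_write.write(data)

# read exactly len(data) bytes (or whatever arrives before the timeout)
received = port_read.read(len(data))
print binascii.hexlify(received)   # expected: 0a2301aaabacad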
I'm getting Jenkins console logs and writing them into an output stream like this:
ByteArrayOutputStream stream = new ByteArrayOutputStream()
currentBuild.rawBuild.getLogText().writeLogTo(0, stream)
However, the downside of this approach is that the writeLogTo() method is limited to 10,000 lines:
https://github.com/jenkinsci/stapler/blob/master/core/src/main/java/org/kohsuke/stapler/framework/io/LargeText.java#L572
In this case, if the Jenkins console log is longer than 10,000 lines, everything from line 10,000 on is lost and never written to the buffer.
I'm trying to rewrite the above approach in the simplest way possible to handle logs with more than 10,000 lines.
I feel like my attempt is very complicated and error-prone. Is there an easier way to implement this logic?
Please note that the code below is not tested; this is just a draft of how I'm planning to implement it:
ByteArrayOutputStream stream = new ByteArrayOutputStream()
def log = currentBuild.rawBuild.getLogText()
def offset = 0
def maxNumOfLines = 10000

// get total number of lines in the log
// def totalLines = (still trying to figure out how to get it)

def numOfExecutions = 1
if (totalLines > maxNumOfLines) {
    // round up so the final partial chunk of lines is not dropped
    numOfExecutions = (totalLines + maxNumOfLines - 1).intdiv(maxNumOfLines)
}

for (int i = 0; i < numOfExecutions; i++) {
    log.writeLogTo(offset, stream)
    offset += maxNumOfLines
}
writeLogTo(long start, OutputStream out)
According to the comments, this method returns the offset at which to start the next write operation.
So the code could look like this:
def logFile = currentBuild.rawBuild.getLogText()
def start = 0
while (logFile.length() > start) {
    start = logFile.writeLogTo(start, stream)
}
stream could be a FileOutputStream to avoid reading the whole log into memory, as shown in the sketch below.
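For example, a minimal sketch (untested; the output path is just an illustration):

def logFile = currentBuild.rawBuild.getLogText()
new File('/tmp/full-console.log').withOutputStream { stream ->
    long start = 0
    // writeLogTo() returns the offset to resume from, so loop until the end of the log
    while (logFile.length() > start) {
        start = logFile.writeLogTo(start, stream)
    }
}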
There is another method, readAll().
So the code could be as simple as this to read the whole log as text:
def logText=currentBuild.rawBuild.getLogText().readAll().getText()
Or if you want to transfer it to a local file:
new File('path/to/file.log').withWriter('UTF-8') { w ->
    w << currentBuild.rawBuild.getLogText().readAll()
}
I have a Node.js program that concatenates a string to an encrypted message and base64-encodes the result. In my server program, when I try to base64-decode it, I do not get back the originally generated encrypted message.
I have reproduced the problem in the simple program below.
In this program, I:
pass in an encrypted and base64-encoded message,
decode it and concatenate another string,
encode the finalMessage,
decode the finalMessage,
split it to get the encrypted message back,
encode the encrypted message to compare it with the original message.
The result: the original message passed to this function is not the same as the final message.
function decodeAndEncode(message) {
    console.log("message---" + message)
    const buffer = Buffer.from(message, 'base64');
    console.log("buffer---" + buffer)
    const updatedStringBuffer = Buffer.from('648f3ec157637553f170bccfe56bc32058d11741d016bf120e7001148b19a4d1');
    const finalEncodedMsg = Buffer.from(updatedStringBuffer + "|" + buffer).toString('base64')
    console.log("updatedMessage ---" + finalEncodedMsg);
    const updatedMessageBuffer = Buffer.from(finalEncodedMsg, 'base64');
    console.log("updatedMessageBuffer ---" + updatedMessageBuffer);
    const getBackOriginalMsg = updatedMessageBuffer.toString('utf-8', updatedStringBuffer.length + 1);
    console.log("getBackOriginalMsg---" + getBackOriginalMsg);
    const encodedMessageBack = Buffer.from(getBackOriginalMsg).toString('base64')
    console.log("encodedMessageBack--- " + encodedMessageBack)
}
const message = 'KCof0N56Z0X5piDvPO4FRL6e80oOxxPzzTMie+QRUy00RzwBn1qubNTtt8z5J+LykqlbcWSWfjGarNr4c40I+RdrI+Fi1r/wCs2ql0kvYYapTaaz9lT2EeMuwTp//kyVDUxaaHmBGaN1Ai7DQz44yKAwAnStWFP/lAuxLReQFp4A8wg9e22irkvC3bIMgpKUIheo/58WD03roH5IQsfIsY7oveODIR5s+T1lmIYBBH0IXZqwDOQpArcy82RMMCme6unhJZPIsWqSlVAEWtD89muXdnpvQRFH88exZ1v3WiYYnlJruFoGz7Yi19nrvYI9gkhoee5Idi2m1w1LmDw8EQ==';
const enc = decodeAndEncode(message)
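The likely culprit is the expression updatedStringBuffer + "|" + buffer: using + coerces both Buffers to UTF-8 strings, and since the decoded ciphertext is arbitrary binary (not valid UTF-8), the round trip through a string corrupts its bytes. A minimal sketch (untested) of the same flow that stays in Buffer space the whole time:

function decodeAndEncode(message) {
    const cipherBytes = Buffer.from(message, 'base64');
    const prefix = Buffer.from('648f3ec157637553f170bccfe56bc32058d11741d016bf120e7001148b19a4d1');
    const separator = Buffer.from('|');

    // concatenate as raw bytes instead of strings
    const combined = Buffer.concat([prefix, separator, cipherBytes]);
    const finalEncodedMsg = combined.toString('base64');

    // decode and split by byte offset, again without any string conversion
    const decoded = Buffer.from(finalEncodedMsg, 'base64');
    const recovered = decoded.subarray(prefix.length + separator.length);

    // should print true: the recovered payload matches the original input
    console.log(recovered.toString('base64') === message);
}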
I'm using the following code:
GcsService gcsService = GcsServiceFactory.createGcsService();
GcsFilename filename = new GcsFilename(BUCKETNAME, fileName);
GcsFileOptions options = new GcsFileOptions.Builder()
        .mimeType(contentType)
        .acl("public-read")
        .addUserMetadata("myfield1", "my field value")
        .build();

@SuppressWarnings("resource")
GcsOutputChannel outputChannel =
        gcsService.createOrReplace(filename, options);
outputChannel.write(ByteBuffer.wrap(byteArray));
outputChannel.close();
The problem is that when I try to store video files, I have to hold the whole file in byteArray, which could cause memory issues.
But I cannot find any interface that does the same with a stream.
Questions:
Should I worry about memory issues on the App Engine server, or is it capable of keeping a one-minute video in memory?
Is it possible to use a stream instead of a byte array? How?
I'm reading the bytes as byte[] byteArray = IOUtils.toByteArray(stream); should I use the byte array as a real buffer, read chunks from the stream, and upload them to GCS? How do I do that?
The amount of memory available depends on the App Engine instance type you've configured. Streaming this data seems like a good idea if you can.
I'm not sure about the GcsService API, but it looks like you can do this using the gcloud Storage API:
https://github.com/GoogleCloudPlatform/gcloud-java/blob/master/gcloud-java-storage/src/main/java/com/google/cloud/storage/Storage.java
This code might work (untested)...
final BlobInfo info = BlobInfo.builder(bucket.getBucketName(), "name").contentType("image/png").build();
final ReadableByteChannel src = Channels.newChannel(stream);
final WriteChannel dst = gcsStorage.writer(info);
fastChannelCopy(src, dst);
dst.close(); // closing the channel finalizes the upload
private void fastChannelCopy(final ReadableByteChannel src, final WritableByteChannel dest) throws IOException {
    final ByteBuffer buffer = ByteBuffer.allocateDirect(16 * 1024);
    while (src.read(buffer) != -1) {
        buffer.flip();       // prepare the buffer to be drained
        dest.write(buffer);  // write to the channel; may block
        // If the transfer was partial, shift the remainder down;
        // if the buffer is empty, this is the same as clear()
        buffer.compact();
    }
    // EOF will leave the buffer in fill state
    buffer.flip();
    // make sure the buffer is fully drained
    while (buffer.hasRemaining()) {
        dest.write(buffer);
    }
}
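If I remember correctly, GcsOutputChannel from the appengine-gcs-client also implements WritableByteChannel, so the same copy loop might work without switching APIs. A rough sketch (untested; please verify that assumption against the GcsService javadoc):

GcsOutputChannel outputChannel = gcsService.createOrReplace(filename, options);
ReadableByteChannel src = Channels.newChannel(stream);
try {
    fastChannelCopy(src, outputChannel);
} finally {
    src.close();
    outputChannel.close(); // the object only appears in GCS once the channel is closed
}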
I was trying to build an encryption program in Python 2.7. It would read the binary data from a file and then use a key to encrypt it. However, I quickly ran into a problem. Files like images and executables come out as hex values when read, but text files come out as plain text, even when opened with open() in binary mode. Even if I run
file=open("myfile.txt", "rb")
out=file.read()
it still comes out as just text. I'm on Windows 7, not Linux, which I think may make a difference. Is there any way I can read the binary from ANY file (including text files), not just image and executable files?
Even when reading a file with the 'rb' flag, if your file contains the byte '\x41' it will be printed as the letter 'A' in the console.
If you want the hex values, encode the file content as hex, which means:
content = open('text.txt', 'rb').read()

# Since Python 3.5:
hex = content.hex()
# else (Python 2):
hex = content.encode('hex')
Take a look at the code below; it also illustrates several points relevant to you.
from hashlib import md5
from Crypto.Cipher import AES
from Crypto import Random
def derive_key_and_iv(password, salt, key_length, iv_length):
    # OpenSSL EVP_BytesToKey-style derivation: keep hashing until we have
    # enough bytes for both the key and the IV
    d = d_i = ''
    while len(d) < key_length + iv_length:
        d_i = md5(d_i + password + salt).digest()
        d += d_i
    return d[:key_length], d[key_length:key_length + iv_length]
def encrypt(in_file, out_file, password, key_length=32):
    bs = AES.block_size
    salt = Random.new().read(bs - len('Salted__'))
    key, iv = derive_key_and_iv(password, salt, key_length, bs)
    cipher = AES.new(key, AES.MODE_CBC, iv)
    # OpenSSL-compatible header: the literal 'Salted__' followed by the salt
    out_file.write('Salted__' + salt)
    finished = False
    while not finished:
        chunk = in_file.read(1024 * bs)
        if len(chunk) == 0 or len(chunk) % bs != 0:
            # pad the last chunk (PKCS#7-style) so its length is a multiple of the block size
            padding_length = (bs - len(chunk) % bs) or bs
            chunk += padding_length * chr(padding_length)
            finished = True
        out_file.write(cipher.encrypt(chunk))
def decrypt(in_file, out_file, password, key_length=32):
    bs = AES.block_size
    # skip the 'Salted__' prefix and read the salt back from the header
    salt = in_file.read(bs)[len('Salted__'):]
    key, iv = derive_key_and_iv(password, salt, key_length, bs)
    cipher = AES.new(key, AES.MODE_CBC, iv)
    next_chunk = ''
    finished = False
    while not finished:
        chunk, next_chunk = next_chunk, cipher.decrypt(in_file.read(1024 * bs))
        if len(next_chunk) == 0:
            # last chunk: strip the padding added during encryption
            padding_length = ord(chunk[-1])
            chunk = chunk[:-padding_length]
            finished = True
        out_file.write(chunk)
Usage:

with open(in_filename, 'rb') as in_file, open(out_filename, 'wb') as out_file:
    encrypt(in_file, out_file, password)

with open(in_filename, 'rb') as in_file, open(out_filename, 'wb') as out_file:
    decrypt(in_file, out_file, password)
Your binary file is coming out looking like text because the file is being treated as if it were encoded in an 8-bit encoding (ASCII, Latin-1, etc.). Also, in Python 2, bytes and (text) characters are used interchangeably; i.e. a string is just an array of ASCII bytes.
You should look up the differences between Python 2 and Python 3 text handling, and you will quickly see why anomalies like the one you are encountering can arise. Most Python 2 encryption modules work on these byte strings.
Your "binary" non-text files are not really being treated any differently from the text ones; their bytes just don't map to an intelligible encoding that you recognize, whereas the text ones do.
I am trying to build and send 802.11 frames in C. I saw that it is possible to do this with pcap, for instance. However, in all the examples I have seen, I have to set the sequence number and the other control fields myself. So I am wondering whether there is an API that manages all of this control logic, where I only have to specify the addresses?
Thank you in advance,
First of all, what type of packet are you trying to make? Data frames or beacon frames?
If you are creating them from scratch you need to set all the parameters yourself. I am using this header file
ieee80211: http://lxr.free-electrons.com/source/include/linux/ieee80211.h
for the 802.11 configuration parameters, and for radiotap headers you can use
radiotap: http://lxr.free-electrons.com/source/include/net/ieee80211_radiotap.h
I can give you some example code (part of a larger program) for creating beacon frames:
// Add the radiotap header
radiotap->it_version = 0;
radiotap->it_len = sizeof(*radiotap) + sizeof(dataRate);
radiotap->it_present = (1 << IEEE80211_RADIOTAP_RATE);
// Beacon packet flags
dot80211->i_fc[0] = IEEE80211_FC0_VERSION_0 | IEEE80211_FC0_TYPE_MGT | IEEE80211_FC0_SUBTYPE_BEACON;
dot80211->i_fc[1] = IEEE80211_FC1_DIR_NODS;
dot80211->i_dur[0] = 0x0;
dot80211->i_dur[1] = 0x0;
// Destination = broadcast (no retries)
memcpy( dot80211->i_addr1, mac_Destination, IEEE80211_ADDR_LEN );
// Source = our own mac address
memcpy( dot80211->i_addr2, mac_source, IEEE80211_ADDR_LEN );
// BSS = our mac address
memcpy( dot80211->i_addr3, mac_BSSID, IEEE80211_ADDR_LEN );
// Sequence control: Automatically set by the driver
beacon->beacon_timestamp = TimeStamp;
printf("%" PRIu64, beacon->beacon_timestamp);

// interval = 100 "time units" = 102.4 ms
// Each time unit is equal to 1024 us
beacon->beacon_interval = BEACON_INTERVAL;

// capabilities = sent by ESS
// beacon->beacon_capabilities = 0x0003;
For example:
dot80211->i_fc[0] = IEEE80211_FC0_VERSION_0 | IEEE80211_FC0_TYPE_MGT | IEEE80211_FC0_SUBTYPE_BEACON;
This is how you set the frame control field, using the constants defined in that header.
My suggestion is to create your own header files, or modify these ones according to your needs.
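Since the question mentions pcap, here is a minimal sketch (untested, with a hypothetical monitor-mode interface name "wlan0mon") of how a finished radiotap + 802.11 frame built this way could be injected with libpcap:

#include <pcap.h>
#include <stdio.h>

/* Inject a prebuilt radiotap + 802.11 frame on a monitor-mode interface. */
int inject_frame(const unsigned char *frame, size_t frame_len)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *handle = pcap_open_live("wlan0mon", 65535, 1, 1000, errbuf);
    if (handle == NULL) {
        fprintf(stderr, "pcap_open_live failed: %s\n", errbuf);
        return -1;
    }

    /* pcap_inject() returns the number of bytes written, or -1 on error */
    if (pcap_inject(handle, frame, frame_len) == -1) {
        pcap_perror(handle, "pcap_inject");
        pcap_close(handle);
        return -1;
    }

    pcap_close(handle);
    return 0;
}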
Hope this helps!