I have the following code, with which I am trying to achieve one-hot encoding.
import tensorflow as tf

k = tf.Variable(tf.zeros((10, 1)))
hprev = tf.Variable(tf.zeros((10, 1)))
x = tf.placeholder(tf.int32, shape=None, name="x")
y_op = tf.assign(k, k[x, 0].assign(1))
M_c = tf.concat((hprev, y_op), axis=0)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print(sess.run(y_op, feed_dict={x: 1}))
    print(M_c.eval())
I get the error: You must feed a value for placeholder tensor 'x_64' with dtype int32. Yet I have passed 1 as the value, which to my understanding is an integer. What am I doing wrong? I am still a beginner.
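A hedged guess at the cause (my note, not part of the original question): the second print evaluates M_c, which depends on y_op and therefore on the placeholder x, so M_c.eval() needs its own feed_dict. A minimal sketch of that change:

with tf.Session() as sess:
    sess.run(init)
    print(sess.run(y_op, feed_dict={x: 1}))
    # M_c depends on y_op and therefore on the placeholder x,
    # so x must be fed here as well.
    print(M_c.eval(feed_dict={x: 1}))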
I have an empty unicode array:
a = np.array([], dtype=np.str_)
I want to encode it:
b = np.char.encode(a, encoding='utf8')
Why is the result an empty array with dtype=float64?
# array([], dtype=float64)
If the array is not empty, the resulting array is a properly encoded array with dtype=|S[n]:
a = np.array(['ss', 'ff☆'], dtype=np.str_)
b = np.char.encode(a, encoding='utf8')
# array([b'ss', b'ff\xe2\x98\x86'], dtype='|S5')
EDIT: The accepted answer below does, in fact, answer the question as posed, but if you come here looking for a workaround, here is what I did:
if a.size == 0:
    encoded_array = np.chararray((0,))
else:
    encoded_array = np.char.encode(a, encoding='utf8')
This will produce an empty encoded array with dtype='|S1' if your decoded array is empty.
The source of numpy.char.encode is available here. It basically calls _vec_string, which returns an empty array of type np.object_ in this case. That result is passed to _to_string_or_unicode_array, which builds the final array and determines its type; its code is available here. It essentially converts the NumPy array to a list and then hands it to np.asarray. The goal of this operation is to determine the type of the array, but the catch is that empty arrays get a default dtype of np.float64 by convention (I think this is because NumPy was initially designed for physicists, who usually work with np.float64 arrays). That result is quite unexpected in this context, but an "S0" dtype does not exist, and I am not sure everyone would agree that "S1" is better here (still, it is certainly better than np.float64). Feel free to file an issue on the NumPy GitHub repository to start a discussion about this behaviour.
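A quick illustration of that default (my own sketch, not from the original answer): np.asarray infers float64 from an empty list, while a non-empty list of byte strings gets a fixed-width |S dtype.

import numpy as np

# An empty list gives NumPy nothing to infer a dtype from,
# so it falls back to the default float64.
print(np.asarray([]).dtype)              # float64

# With actual byte strings present, a fixed-width |S dtype is inferred.
print(np.asarray([b'ss', b'ff']).dtype)  # |S2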
I have a problem with percent.encode() in the package:convert/convert.dart library.
I have an API that is used by Arabs and can contain Arabic characters. One of those characters is "خ", and I want to convert it with percent.encode('خ'.codeUnits). Its code unit is 1582, which is 0x62e in hexadecimal. In this case I get an exception because it is out of the range of bytes that this library can convert: Unhandled Exception: FormatException: Invalid byte 0x62. Can you please help me with my problem? Are there any alternatives I can use?
I have found a solution: I used Uri.encodeQueryComponent(data). It did the trick.
[Update 1]
There is an alternative way:
percent.encode(utf8.encode('خ'))
Problem:
Hello, I have been struggling recently in my programming endeavours. I have managed to receive the output below from Google Speech to Text, but I cannot figure out how to draw data from this block.
Excerpt 1:
[VoiceMain]: Successfully initialized
{"result":[]}
{"result":[{"alternative":[{"transcript":"hello","confidence":0.46152416},{"transcript":"how low"},{"transcript":"how lo"},{"transcript":"how long"},{"transcript":"Polo"}],"final":true}],"result_index":0}
[VoiceMain]: Successfully initialized
{"result":[]}
{"result":[{"alternative":[{"transcript":"hello"},{"transcript":"how long"},{"transcript":"how low"},{"transcript":"howlong"}],"final":true}],"result_index":0}
Objective:
My goal is to extract the string "hello" (without the quotation marks) from the first transcript of each block and set it equal to a variable. The problem is that I do not know in advance what the phrase will be. Instead of "hello", the phrase may be a string of any length. Even if it is a different string, I would still like to assign it to the same variable that "hello" would have been assigned to.
Furthermore, I would like to extract the number after the word "confidence". In this case, it is 0.46152416. Data type does not matter for the confidence variable. The confidence variable appears to be more difficult to extract from the blocks because it may or may not be present. If it is not present, it must be ignored. If it is present however, it must be detected and stored as a variable.
Also please note that this text block is stored within a file named "CurlOutput.txt".
All help or advice related to solving this problem is greatly appreciated.
You could do this with regex, but I am assuming you will want to use this as a dict later in your code, so here is a Python approach that builds the result as a dictionary.
import json

with open('CurlOutput.txt') as f:
    lines = f.read().splitlines()

flag = '{"result":[]}'  # lines containing only an empty result carry no data

for line in lines:  # loop over each line in the file
    # keep only the lines that actually contain transcription data
    if not line.startswith('{"result":') or line == flag:
        continue
    results = json.loads(line)['result']  # load the JSON data as a dict

    # If you just want to change the first alternative:
    # results[0]['alternative'][0]['transcript'] = 'myNewString'

    # If you want to check all alternatives for confidence and transcript
    for result in results[0]['alternative']:  # loop over each alternative
        transcript = result['transcript']
        confidence = None
        if 'confidence' in result:
            confidence = result['confidence']
        # now do whatever you want with confidence and transcript
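As a small usage sketch of my own (not part of the original answer), here is a hypothetical helper that pulls out just the first transcript and its optional confidence from one data line, which is what the question asks for:

# Hypothetical helper, assuming a data line in the same format as above.
def first_transcript(line):
    results = json.loads(line)['result']
    best = results[0]['alternative'][0]   # the first (top) alternative
    phrase = best['transcript']           # e.g. "hello"
    confidence = best.get('confidence')   # e.g. 0.46152416, or None if absent
    return phrase, confidence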
I am using Maya to do some procedural work, and I have a lot of textures that I need to load into Maya, all of which have transparencies (alpha channels). I would very much like to be able to automate this process. Using PyMEL, I can create my textures and hook them up to a shader, but the alpha doesn't get set properly by default. There is an attribute in the psdFileTex node called "Alpha to Use", and it must be set to "Transparency" in order for my alpha channel to work. My question is this: how do I use PyMEL scripting to set the "Alpha to Use" attribute properly?
Here is the code I am using to set up my textures:
import pymel.core as pm
pm.shadingNode('lambert', asShader=True, name='myShader1')
pm.sets(renderable=True, noSurfaceShader=True, empty=True, name='myShader1SG')
pm.connectAttr('myShader1.outColor', 'myShader1SG.surfaceShader', f=True)
pm.shadingNode('psdFileTex', asTexture=True, name='myShader1PSD')
pm.connectAttr('myShader1PSD.outColor', 'myShader1.color')
pm.connectAttr('myShader1PSD.outTransparency', 'myShader1.transparency')
pm.setAttr('myShader1ColorPSD.fileTextureName', '<pathway>/myShader1_texture.psd', type='string')
If anyone can help me, I would really appreciate it.
Thanks
With any node, you can use listAttr() to get the available editable attributes. Run listAttr('myShader1PSD') and note that its output includes two attributes called 'alpha' and 'alphaList'. alpha will return the currently selected alpha channel; alphaList will return however many alpha channels you have in your PSD.
Example
pm.PyNode('myShader1PSD').alphaList.get()
# Result: [u'Alpha 1', u'Alpha 2'] #
If you know you'll only ever be using just the one alpha, or the first alpha channel, you can simply do this.
psdShader = pm.PyNode('myShader1PSD')
alphaList = psdShader.alphaList.get()
if len(alphaList) > 0:
    psdShader.alpha.set(alphaList[0])
else:
    # no alpha channel
    pass
Remember that lists are indexed from 0, so our first alpha channel will be located at position 0.
Additionally, and unrelated: while you're still using string-based commands carried over from maya.cmds into PyMEL, there are some PyMEL idioms you can use to make your code read a little nicer.
pm.setAttr('myShader1ColorPSD.fileTextureName', '<pathway>/myShader1_texture.psd', type='string')
We can convert this to pymel like so:
pm.PyNode('myShader1ColorPSD').fileTextureName.set('<pathway>/myShader1_texture.psd')
And:
pm.connectAttr('myShader1PSD.outColor', 'myShader1.color')
Can be converted to:
pm.PyNode('myShader1PSD').outColor.connect(pm.PyNode('myShader1').color)
While they may only be small changes, it reads just that little bit nicer, and it's native PyMEL.
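As a side note of my own (not part of the original answer), PyMEL also overloads the >> operator on attributes as a shorthand for connecting them, so the same connection can be written as:

# >> connects the attribute on the left to the attribute on the right
pm.PyNode('myShader1PSD').outColor >> pm.PyNode('myShader1').color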
Anyway, I hope I have helped you!
I'm trying to query my PostGIS database using GeoDjango, but I get an error for which I have found no solution on the internet.
close_loc=PlanetOsmPoint.objects.get(way__distance_lte=(lePoint, D(**distance_from_point)))
Whatever I try to do with the result (close_loc), such as printing it, I get this error:
django.db.utils.DatabaseError: Only lon/lat coordinate systems are supported in geography.
I tried to convert it to a correct format with transform(SRID), but nothing was solved; the problem remains the same.
Here is some information:
Transformation:
sr1=SpatialReference('54004')
sr2=SpatialReference('NAD83')
ct=CoordTransform(sr1, sr2)
What I'm doing after getting close_loc:
close_loc.transform(ct)
print close_loc[0]
close_loc's type is GeoQuerySet.
How can I use this result?
The transform() function expects an integer, not a string. The correct syntax is:
close_loc.transform( new_srid_number )
In your case, something like this:
close_loc.transform(54004)
Hope it'll work!