I am reading the tf-DeepLab code with the nas_hnasnet backbone from the paper Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation, published at CVPR 2019. I am confused by the network layer defined as "backbone=[0, 0, 0, 1, 2, 1, 2, 2, 3, 3, 2, 1]" in nas_network.py. Isn't it supposed to be found automatically? Why is it pre-defined and not changed during training?
I have added the following two equations to conditional formatting:
Green: =IF(REGEXMATCH(VLOOKUP(X2, INDIRECT("DEALS!$A$2:F"),5, FALSE), "Likes"), R2>=VLOOKUP(X2, INDIRECT("DEALS!$A$2:F"), 4, FALSE), T2>=VLOOKUP(X2, INDIRECT("DEALS!$A$2:F"), 4, FALSE))
Red: =IF(REGEXMATCH(VLOOKUP(X2, INDIRECT("DEALS!$A$2:F"),5, FALSE), "Likes"), NOT(R2>=VLOOKUP(X2, INDIRECT("DEALS!$A$2:F"), 4, FALSE)), NOT(T2>=VLOOKUP(X2, INDIRECT("DEALS!$A$2:F"), 4, FALSE)))
The colors should change accordingly depending on whether the target (views in this case) has been met or not.
Below I have also added the equations to the cells to check if the logic is correct, which it appears to be (left = green logic, right = red logic).
For whatever reason, the first row, despite the target not being met, has decided to select the green color. The row below that is doing the complete opposite. And to top it all off, the last two rows are not selecting a color at all even though I have applied the conditional formatting to the entire column:
I am also experiencing weird behavior when dragging equations within this P column, but do not see this same behavior in other columns that also use conditional formatting:
https://i.gyazo.com/5e002e3d08e8337591573b81d9fc92e2.mp4
This has left me completely baffled, and I am not sure what is going on since the equation's logic does not appear to be the issue.
Appreciate any help I can get with this issue!
For reference, here is the other sheet that the VLOOKUP() function is grabbing from:
Do not lock ($) references inside INDIRECT. If something is between double quotes, it is a text string, not an active reference, and text strings are not affected by dragging.
For green use:
=IF(REGEXMATCH(VLOOKUP(Z2, INDIRECT("DEALS!A2:F"), 5, 0), "Likes"),
R2>=VLOOKUP(Z2, INDIRECT("DEALS!A2:F"), 4, 0),
T2>=VLOOKUP(Z2, INDIRECT("DEALS!A2:F"), 4, 0))
For red use:
=IF(REGEXMATCH(VLOOKUP(Z2, INDIRECT("DEALS!A2:F"), 5, 0), "Likes"),
NOT(R2>=VLOOKUP(Z2, INDIRECT("DEALS!A2:F"), 4, 0)),
NOT(T2>=VLOOKUP(Z2, INDIRECT("DEALS!A2:F"), 4, 0)))
demo sheet
Update:
Don't drag anything. Use this in P2:
=ARRAYFORMULA(IFNA(TEXT(VLOOKUP(Z2:Z,DEALS!A2:F,4,0),
"#,###,##0")& " " &VLOOKUP(Z2:Z,DEALS!A2:F,5,0)))
In TensorFlow Core for Python there is an operation called tf.math.scalar_mul.
I would like to scale tensors in TensorFlow.js. When I try, for instance, a * 0.1, I get an error message (at least from TypeScript): The left-hand side of an arithmetic operation must be of type 'any', 'number', 'bigint' or an enum type. ts(2362).
Is it possible to scale tensors without converting them to arrays, scaling element-wise, and then transforming them back into tensors?
Although tf.scalar can be used, one can also use tensor.mul(number) directly, like the following:
tf.tensor([1, 2, 3, 4]).mul(5).print(); // [5, 10, 15, 20]
I found the answer in the API documentation. To multiply a tensor a by 5, just use a.mul(tf.scalar(5)).
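For the 0.1 case from the question, here is a minimal sketch (assuming tf is the imported @tensorflow/tfjs module; the tensor values are just illustrative):
import * as tf from '@tensorflow/tfjs';
// Scale a tensor element-wise by a scalar, staying entirely in tensor land.
const a = tf.tensor([1, 2, 3, 4]);
const scaled = a.mul(tf.scalar(0.1));
scaled.print(); // roughly [0.1, 0.2, 0.3, 0.4]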
I am trying to perform a spatial convolution (e.g. on an image) in PyTorch on dense input, using a sparse filter matrix.
Sparse tensors are implemented in PyTorch. I tried to use a sparse tensor, but it ends up with a segmentation fault.
import torch
from torch.autograd import Variable
from torch.nn import functional as F
# build sparse filter matrix
i = torch.LongTensor([[0, 1, 1],[2, 0, 2]])
v = torch.FloatTensor([3, 4, 5])
filter = Variable(torch.sparse.FloatTensor(i, v, torch.Size([3,3])))
inputs = Variable(torch.randn(1, 1, 6, 6))  # dense input
F.conv2d(inputs, filter)  # this call crashes with a segmentation fault
Can anyone give me a hint on how to do that?
Thanks in advance!
dymat
I know this question is outdated but I also know that there are still people looking for an answer (like myself) so here goes...
On sparse filters
If you'd like sparse convolution without the freedom to specify the sparsity pattern yourself, take a look at dilated conv (also called atrous conv). This is implemented in PyTorch and you can control the degree of sparsity by adjusting the dilation param in Conv2d.
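A minimal sketch of that (the layer sizes and the dilation value are just illustrative):
import torch
import torch.nn as nn
# A 3x3 kernel with dilation=2 covers a 5x5 receptive field, i.e. the
# filter taps are applied in a fixed, sparse (dilated) pattern.
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, dilation=2)
x = torch.randn(1, 1, 8, 8)  # dense input: (batch, channels, H, W)
y = conv(x)
print(y.shape)  # torch.Size([1, 1, 4, 4])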
If you'd like to specify the sparsity pattern yourself, to the best of my knowledge, this feature is not currently available in PyTorch. But you may want to check this out if you are OK with using TensorFlow. There is also a blog post providing more details on this repo.
On sparse input
A list of existing and TODO sparse tensor operations is available here.
This talks about the current state of sparse tensors in PyTorch.
This lets you propose your own sparse tensor use case to the PyTorch contributors.
But at the time of this writing, I did not see conv on sparse tensors being an implemented feature or on the TODO list. nn.Linear on sparse input, however, is supported.
And if you build a sparse tensor and apply a conv layer to it, PyTorch (1.1.0) throws an exception:
>>> a = torch.zeros((1, 3, 2, 2), layout=torch.sparse_coo)
>>> net = torch.nn.Conv2d(1, 1, 1)
>>> b = net(a)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 338, in forward
self.padding, self.dilation, self.groups)
RuntimeError: sparse tensors do not have is_contiguous
>>> torch.__version__
'1.1.0'
Switching to a linear layer, however, works:
>>> c = torch.zeros((1, 2), layout=torch.sparse_coo)
>>> another_net = torch.nn.Linear(2, 1)
>>> d = another_net(c)
>>> d
tensor([[0.1944]], grad_fn=<AddmmBackward>)
>>> d.backward()
>>> another_net.weight.grad
tensor([[0., 0.]])
>>> another_net.bias.grad
tensor([1.])
These guys did something like a sparse conv2d: https://github.com/numenta/nupic.torch/
I have used several tools, such as the cJSON, nxjson, and jsmn parsers, to parse the JSON response, but none of the tools I have used gives the output in a structured format. Below is my JSON response as a string:
{"Code":1,"MSN":0,"HWID":7001,"Data":{"SignOffRequest":{"messageId":0,"barCodeReadErrorCnt":0,"markSenseReadErrorCnt":0,"markSenseValidationErrorCnt":0,"postPrintErrorCnt":0,"custTicketFeedErrorCnt":0,"custInputTicketJamsCnt":0,"keyStrokeCnt":0,"keyStrokeErrorCnt":0,"commCrcErrorCnt":0,"readTxnCnt":0,"keyedTxnCnt":0,"ticketMotionErrorCnt":0,"blankFeedErrorCnt":0,"blankTicketJamCnt":0,"startupNormalRespCnt":0,"startupErrorRespCnt":0,"primCommMesgSentCnt":0,"commRetransmitTxnCnt":0,"rawMessage":null,"TableUpdateNo":1,"FixtureUpdateNo":0}}}
I have used the cJSON tool, and the output is as below, which is also a string:
{
    "Code": 1,
    "MSN": 0,
    "HWID": 7001,
    "Data": {
        "SignOffRequest": {
            "messageId": 0,
            "barCodeReadErrorCnt": 0,
            "markSenseReadErrorCnt": 0,
            "markSenseValidationErrorCnt": 0,
            "postPrintErrorCnt": 0,
            "custTicketFeedErrorCnt": 0,
            "custInputTicketJamsCnt": 0,
            "keyStrokeCnt": 0,
            "keyStrokeErrorCnt": 0,
            "commCrcErrorCnt": 0,
            "readTxnCnt": 0,
            "keyedTxnCnt": 0,
            "ticketMotionErrorCnt": 0,
            "blankFeedErrorCnt": 0,
            "blankTicketJamCnt": 0,
            "startupNormalRespCnt": 0,
            "startupErrorRespCnt": 0,
            "primCommMesgSentCnt": 0,
            "commRetransmitTxnCnt": 0,
            "rawMessage": null,
            "TableUpdateNo": 1,
            "FixtureUpdateNo": 0
        }
    }
}
But I don't want the output in the above format; I want the output in the form of a C structure. Is it possible to get the output as a C structure?
You need to add explicit code that extracts the relevant fields from the parsed JSON values. This cannot be magically automated in general.
Some JSON libraries slightly facilitate this task. For instance, jansson has a quite useful json_unpack function with which you can extract (in a single call) some fields from a parsed JSON value.
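For illustration, a minimal sketch with jansson (the struct, the chosen fields, and the trimmed-down input are just examples, not a complete mapping of your response):
#include <stdio.h>
#include <jansson.h>

/* Illustrative struct holding only a few of the fields from the response. */
struct sign_off_request {
    int message_id;
    int table_update_no;
    int fixture_update_no;
};

int main(void)
{
    /* Shortened version of the JSON response from the question. */
    const char *text =
        "{\"Code\":1,\"MSN\":0,\"HWID\":7001,"
        "\"Data\":{\"SignOffRequest\":{\"messageId\":0,"
        "\"TableUpdateNo\":1,\"FixtureUpdateNo\":0}}}";
    json_error_t error;
    json_t *root = json_loads(text, 0, &error);
    if (!root) {
        fprintf(stderr, "parse error on line %d: %s\n", error.line, error.text);
        return 1;
    }
    int code, msn, hwid;
    struct sign_off_request req;
    /* Walk into Data.SignOffRequest and pull out the fields we care about. */
    if (json_unpack(root, "{s:i, s:i, s:i, s:{s:{s:i, s:i, s:i}}}",
                    "Code", &code, "MSN", &msn, "HWID", &hwid,
                    "Data", "SignOffRequest",
                    "messageId", &req.message_id,
                    "TableUpdateNo", &req.table_update_no,
                    "FixtureUpdateNo", &req.fixture_update_no) != 0) {
        fprintf(stderr, "unexpected JSON shape\n");
        json_decref(root);
        return 1;
    }
    printf("Code=%d messageId=%d TableUpdateNo=%d\n",
           code, req.message_id, req.table_update_no);
    json_decref(root);
    return 0;
}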
But it is your responsibility to code the extraction and the validation of information from the JSON value, because only you can know what that JSON really means.
JSON is simply a convenient textual serialization format. It is up to you to give actual meaning to the data. It is also up to you to decide what kind of validation you want to code (to what degree do you trust the emitter of that JSON data?). If the data is coming from the Internet (e.g. AJAX queries), you should trust it as little as possible and validate it as much as possible.
Don't forget to document the meaning of the JSON data.
I have two ArrayLists, x[] and y[]. Suppose:
x[0]= 1, y[0]=2,
x[1]= 3, y[1]=3,
x[2]= 4, y[2]=6,
x[3]= 4, y[3]=9,
x[4]= 7, y[4]=22,
x[5]= -4, y[5]=5,
..............
With a time delay of 10 seconds, the graph goes from [0] to [1], and then it goes on with the same delay.
How do I represent the graph? I think a 3D graph is a must here. But how do I use it in a .NET WinForms application?
You will need to use a component to visualize the data as a graph.
Check out Microsoft Chart Controls; they are a good option.