ImageNet ILSVRC2013 submission failed

Recently, I submitted an entry to the classification challenge on ImageNet ILSVRC2013.
Unfortunately, I received an email like this:
Dear participant,
We received your submission at 2022-12-14 01:39:08 for the classification challenge.
However, it was found that your submission did not conform to the specifications we mentioned. We were therefore unable to evaluate your submission. Please read the ILSVRC 2013 page for more details.
ILSVRC 2013 team
My submission file contains 100,000 rows of data like the following:
`771 778 794 387 650
363 691 764 923 427
737 369 430 531 124`
I'd like to know where my submission went wrong.
Any help is greatly appreciated!
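A frequent cause of this kind of rejection is a file that doesn't match the expected layout. As a hedged sketch (the exact format is specified on the ILSVRC 2013 page; the row count, the five-labels-per-row rule, and the 1–1000 label range below are assumptions based on the classification task, and `validate_submission` is a hypothetical helper, not an official tool), a quick self-check before uploading might look like:

```python
def validate_submission(lines, expected_rows=100_000, labels_per_row=5,
                        min_label=1, max_label=1000):
    """Return a list of human-readable problems found in a submission.

    Assumes one line per test image (in test-set order), each line holding
    exactly `labels_per_row` space-separated integer labels in
    [min_label, max_label].
    """
    problems = []
    if len(lines) != expected_rows:
        problems.append(f"expected {expected_rows} rows, found {len(lines)}")
    for i, line in enumerate(lines, start=1):
        fields = line.split()
        if len(fields) != labels_per_row:
            problems.append(
                f"row {i}: expected {labels_per_row} labels, found {len(fields)}")
            continue
        for f in fields:
            if not f.isdigit() or not (min_label <= int(f) <= max_label):
                problems.append(
                    f"row {i}: label {f!r} outside [{min_label}, {max_label}]")
    return problems

# Example: a 0 would be out of range if the contest's labels are 1-based,
# a common off-by-one when exporting 0-based class indices from a framework.
print(validate_submission(["771 778 794 387 650", "0 691 764 923 427"],
                          expected_rows=2))
```

Other things worth checking: trailing blank lines, Windows line endings, and whether your class indices use the contest's 1-based label mapping rather than a framework's 0-based indices.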

Related

Create a table in Data Studio that shows current/latest value and a previous value and compares them

I have two sheets in a Google Sheet that hold historical data for some internal programs:
- programs: high-level data about each program; each program has data for all of the different report dates
- metrics: specific metrics about each program for each report date
Using my example data, there are 4 programs: a, b, c, and d and I have reports for 1/1/2020, 1/15/2020, 2/3/2020, and 6/20/2020.
I want to create a Data Studio report that will:
- combine the data on report date and program
- show a filter where the user can select which previous report date they want to compare against:
  - the filter should show all of the previous report dates and default-select the most recent one; for example, the filter would show:
    - 1/1/2020
    - 1/15/2020
    - 2/3/2020 (default selected)
  - the filter should only allow selecting one
- show a table with values for the current report date and values for the report date selected in the above filter
Here is an example table using the source data in my Google Sheet when the report is initially loaded and the filter has 2/3/2020 selected by default:
| report date | program | id | l1 manager | status | current value | previous report date value | direction |
|---|---|---|---|---|---|---|---|
| 6/20/2020 | a | 1 | Carlos | bad | 202 | 244 | up |
| 6/20/2020 | b | 2 | Jack | bad | 202 | 328 | up |
| 6/20/2020 | c | 3 | Max | bad | 363 | 249 | down |
| 6/20/2020 | d | 4 | Henry | good | 267 | 284 | up |
If the user selects 1/1/2020 in the filter, then the table would show:
| report date | program | id | l1 manager | status | current value | previous report date value | direction |
|---|---|---|---|---|---|---|---|
| 6/20/2020 | a | 1 | Carlos | bad | 202 | 220 | up |
| 6/20/2020 | b | 2 | Jack | bad | 202 | 348 | up |
| 6/20/2020 | c | 3 | Max | bad | 363 | 266 | down |
| 6/20/2020 | d | 4 | Henry | good | 267 | 225 | down |
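Outside Data Studio, the blend-and-compare logic being asked for can be sketched in pandas (a hedged illustration of the intended result, not a Data Studio feature). Note that in the sample tables "direction" is "up" exactly when the current value is *below* the previous one, i.e. lower values appear to be better; that inferred convention is reproduced here.

```python
import pandas as pd

# Long-format metrics mirroring the sample numbers above.
metrics = pd.DataFrame(
    [("2/3/2020", "a", 244), ("2/3/2020", "b", 328),
     ("2/3/2020", "c", 249), ("2/3/2020", "d", 284),
     ("6/20/2020", "a", 202), ("6/20/2020", "b", 202),
     ("6/20/2020", "c", 363), ("6/20/2020", "d", 267)],
    columns=["report_date", "program", "value"],
)

current_date, selected_previous = "6/20/2020", "2/3/2020"  # the filter choice

# Split into current-report rows and the selected comparison rows,
# then blend them on program.
current = metrics[metrics.report_date == current_date].rename(
    columns={"value": "current_value"})
previous = metrics.loc[metrics.report_date == selected_previous,
                       ["program", "value"]].rename(
    columns={"value": "previous_value"})
table = current.merge(previous, on="program")

# Inferred convention from the sample: "up" when the value decreased.
table["direction"] = table.apply(
    lambda r: "up" if r.current_value < r.previous_value else "down", axis=1)
print(table)
```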

How to compare and select non-changing variables in panel data

I have unbalanced panel data and need to exclude observations (in t) for which the income changed during the year before (t-1), while keeping other observations of these people. Thus, if a change in income happens in year t, then year t should be dropped (for that person).
clear
input year id income
2003 513 1500
2003 517 1600
2003 518 1400
2004 513 1500
2004 517 1600
2004 518 1400
2005 517 1600
2005 513 1700
2005 518 1400
2006 513 1700
2006 517 1800
2006 518 1400
2007 513 1700
2007 517 1600
2007 518 1400
2008 513 1700
2008 517 1600
2008 518 1400
end
xtset id year
xtline income, overlay
To illustrate what's going on, I added an xtline plot, which follows each person's income over the years. ID 518 is the perfect non-changing case (keep all observations). ID 513 has a one-time jump (drop year 2005 for that person). ID 517 has something like a peak, perhaps a one-time measurement error (drop 2006 and 2007).
I think there should be some form of loop: initialize the first value for each person (because this cannot be compared), say t0; then compare t1 to t0 and drop the observation if income changed, then compare t2 to t1, and so on. Because the data is unbalanced, there may be missing year-observations. Thanks for any advice.
Update/Goal: The purpose is to prepare the data for a fixed-effects regression analysis. There is another variable reported for the entire "last year". Income, however, is reported at the interview date (a point in time). I need to get close to something like "last-year income" to relate it to this variable. The procedure is suggested and followed by several publications; I am trying to replicate and understand it.
Solution:
bysort id (year) : drop if income != income[_n-1] & _n > 1
bysort id (year) : gen byte flag = (income != income[_n-1]) if _n > 1
list, sepby(id)
The procedure is VERY IFFY methodologically. There is no need to prepare for the fixed effects analysis other than xtsetting the data; and there rarely is any excuse to create missing data... let alone do so to squeeze the data into the limits of what (other) researchers know about statistics and econometrics. I understand that this is a replication study, but whatever you do with your replication and wherever you present it, you need to point out that the original authors did not have much clue about regression to begin with. Don't try too hard to understand it.
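For comparison, here is a hedged pandas translation of the Stata one-liner above, using the sample panel. The mask is computed on the full sorted data before any rows are removed, which matches how Stata's `drop if` evaluates the condition: a row following a dropped change-year is still compared against its true predecessor.

```python
import pandas as pd

# The sample panel from the question (balanced here, but the
# groupby/diff logic works on unbalanced panels too).
df = pd.DataFrame({
    "year":   [2003, 2004, 2005, 2006, 2007, 2008] * 3,
    "id":     [513] * 6 + [517] * 6 + [518] * 6,
    "income": [1500, 1500, 1700, 1700, 1700, 1700,   # 513: jump in 2005
               1600, 1600, 1600, 1800, 1600, 1600,   # 517: peak in 2006
               1400, 1400, 1400, 1400, 1400, 1400],  # 518: never changes
}).sort_values(["id", "year"])

# income != income[_n-1] within person; diff() is NaN on the first
# observation per id, and fillna(0) keeps it (Stata's `_n > 1` guard).
changed = df.groupby("id")["income"].diff().fillna(0).ne(0)
kept = df[~changed]
print(kept)
```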

Sagemaker: DeepAR Hyperparameter Tuning Error

Running into a new issue with tuning DeepAR on SageMaker when trying to initialize a hyperparameter tuning job; this error also occurs when using test:mean_wQuantileLoss. I've upgraded the sagemaker package, restarted my instance, and restarted the kernel (using a Jupyter notebook), yet the problem persists.
ClientError: An error occurred (ValidationException) when calling the
CreateHyperParameterTuningJob operation: The objective metric type, [Maximize], that you specified for objective metric, [test:RMSE], isn’t valid for the [156387875391.dkr.ecr.us-west-2.amazonaws.com/forecasting-deepar:1] algorithm. Choose a valid objective metric type.
Code:
my_tuner = HyperparameterTuner(estimator=estimator,
                               objective_metric_name="test:RMSE",
                               hyperparameter_ranges=hyperparams,
                               max_jobs=20,
                               max_parallel_jobs=2)
# Start hyperparameter tuning job
my_tuner.fit(inputs=data_channels)
Stack Trace:
ClientError Traceback (most recent call last)
<ipython-input-66-9d6d8de89536> in <module>()
7
8 # Start hyperparameter tuning job
----> 9 my_tuner.fit(inputs=data_channels)
~/anaconda3/envs/python3/lib/python3.6/site-packages/sagemaker/tuner.py in fit(self, inputs, job_name, include_cls_metadata, **kwargs)
255
256 self._prepare_for_training(job_name=job_name, include_cls_metadata=include_cls_metadata)
--> 257 self.latest_tuning_job = _TuningJob.start_new(self, inputs)
258
259 #classmethod
~/anaconda3/envs/python3/lib/python3.6/site-packages/sagemaker/tuner.py in start_new(cls, tuner, inputs)
525 output_config=(config['output_config']),
526 resource_config=(config['resource_config']),
--> 527 stop_condition=(config['stop_condition']), tags=tuner.tags)
528
529 return cls(tuner.sagemaker_session, tuner._current_job_name)
~/anaconda3/envs/python3/lib/python3.6/site-packages/sagemaker/session.py in tune(self, job_name, strategy, objective_type, objective_metric_name, max_jobs, max_parallel_jobs, parameter_ranges, static_hyperparameters, image, input_mode, metric_definitions, role, input_config, output_config, resource_config, stop_condition, tags)
348 LOGGER.info('Creating hyperparameter tuning job with name: {}'.format(job_name))
349 LOGGER.debug('tune request: {}'.format(json.dumps(tune_request, indent=4)))
--> 350 self.sagemaker_client.create_hyper_parameter_tuning_job(**tune_request)
351
352 def stop_tuning_job(self, name):
~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/client.py in _api_call(self, *args, **kwargs)
312 "%s() only accepts keyword arguments." % py_operation_name)
313 # The "self" in this scope is referring to the BaseClient.
--> 314 return self._make_api_call(operation_name, kwargs)
315
316 _api_call.__name__ = str(py_operation_name)
~/anaconda3/envs/python3/lib/python3.6/site-packages/botocore/client.py in _make_api_call(self, operation_name, api_params)
610 error_code = parsed_response.get("Error", {}).get("Code")
611 error_class = self.exceptions.from_code(error_code)
--> 612 raise error_class(parsed_response, operation_name)
613 else:
614 return parsed_response
ClientError: An error occurred (ValidationException) when calling the CreateHyperParameterTuningJob operation:
The objective metric type, [Maximize], that you specified for objective metric, [test:RMSE], isn’t valid for the [156387875391.dkr.ecr.us-west-2.amazonaws.com/forecasting-deepar:1] algorithm.
Choose a valid objective metric type.
It looks like you are trying to maximize this metric; test:RMSE can only be minimized by SageMaker hyperparameter tuning.
To achieve this in the SageMaker Python SDK, create your HyperparameterTuner with objective_type='Minimize'. You can see the full signature of the constructor in the SDK documentation.
Here is the change you should make to your call to HyperparameterTuner:
my_tuner = HyperparameterTuner(estimator=estimator,
                               objective_metric_name="test:RMSE",
                               objective_type='Minimize',
                               hyperparameter_ranges=hyperparams,
                               max_jobs=20,
                               max_parallel_jobs=2)

How to sum two columns of different tables in SQL

I created two tables, book and cd:
| Books | Book_id | author | publisher | rate |
|---|---|---|---|---|
| angular2 | 132 | venkat | ts | 1900 |
| angular | 160 | venkat | ts | 1500 |
| html 5 | 165 | henry | vk | 1500 |
| html | 231 | henry | vk | 2500 |
| css | 256 | mark | adobe | 1600 |
| java | 352 | john | gulberg | 4500 |
| c# | 450 | henry | adobe | 1600 |
| jsp | 451 | henry | vk | 2500 |
| ext js | 555 | kv venkat | w3 | 5102 |
| html | 560 | kv venkat | gulberg | 5000 |
| java2 | 561 | john | gulberg | 9500 |
| java8 | 651 | henry | vk | 1650 |
| js | 654 | henry | ts | 2500 |
| java | 777 | babbage | adobe | 5200 |
| phython | 842 | john | ts | 1500 |
| spring | 852 | henry | w3 | 6230 |
| spring | 895 | mark | tut | 4250 |
| ext js | 965 | henry | gulberg | 4500 |
| book_id | Cd_name | Cd_price |
|---|---|---|
| 132 | angular2 | 500 |
| 132 | angular1 | 600 |
| 132 | angular basics | 600 |
| 132 | angular expert | 900 |
| 160 | begineer_course | 1200 |
| 160 | angular_templates | 500 |
| 165 | html_tutorials | 900 |
| 165 | bootstrap | 1000 |
| 256 | css styles | 650 |
| 256 | expert css | 900 |
| 555 | extjs | 1200 |
| 555 | exjs_applications | 500 |
| 777 | core java | 2500 |
| 777 | java swing | 4500 |
| 777 | java tutorials | 1500 |
| 842 | phython | 650 |
| 852 | spring | 900 |
| 852 | spring mvc | 900 |
From the above two tables, I want to select books, author, and cd_name along with the total cost of the book and CD for each id.
Expected Output
| Books | Book_id | author | cd_name | total price |
|---|---|---|---|---|
| angular2 | 132 | venkat | angular2 | 2400 |
| angular2 | 132 | venkat | angular basics | 2100 |
| angular2 | 132 | venkat | angular expert | 2800 |
| java | 777 | babbage | core java | 7700 |
As in the result above, I need to get the total cost for all of the books and CDs.
In case not all books had a CD:
SELECT A.Books
, A.Book_ID
, A.Author
, B.CD_Name
, A.rate+COALESCE(B.Cd_price,0) AS TOTAL_PRICE
FROM BOOK A
LEFT JOIN CD B ON A.BOOK_ID = B.BOOK_ID
The question's author pointed out that "the table name is book, not Books". I originally used BOOKS as a suggestion because table names are usually plural.
Try this
select books, b.book_id, author, cd_name, (b.rate+c.cd_price) as total_price from book b
join cd c on b.book_id = c.book_id
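To make the LEFT JOIN / COALESCE approach concrete, here is a self-contained SQLite sketch using a few of the sample rows, plus one book without a CD (spring, 895) to show why the LEFT JOIN matters: the inner-join version would silently drop that book.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE book (books TEXT, book_id INTEGER, author TEXT, "
            "publisher TEXT, rate INTEGER)")
cur.execute("CREATE TABLE cd (book_id INTEGER, cd_name TEXT, cd_price INTEGER)")
cur.executemany("INSERT INTO book VALUES (?,?,?,?,?)", [
    ("angular2", 132, "venkat", "ts", 1900),
    ("java", 777, "babbage", "adobe", 5200),
    ("spring", 895, "mark", "tut", 4250),   # no matching CD row
])
cur.executemany("INSERT INTO cd VALUES (?,?,?)", [
    (132, "angular2", 500),
    (777, "core java", 2500),
])

# LEFT JOIN keeps books without CDs; COALESCE treats a missing
# cd_price as 0 so the total is still the book's rate.
rows = cur.execute("""
    SELECT b.books, b.book_id, b.author, c.cd_name,
           b.rate + COALESCE(c.cd_price, 0) AS total_price
    FROM book b
    LEFT JOIN cd c ON b.book_id = c.book_id
""").fetchall()
print(rows)
```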

Solr returning wrong documents when searching a solr.StrField field containing dots

Field Type:
<fieldType name="StrCollectionField" class="solr.StrField" omitNorms="true" multiValued="true" docValues="true"/>
<field name="po_line_status_code" type="StrCollectionField" indexed="true" stored="true" required="false" docValues="false"/>
po_no is the primary key.
Index value: po_line_status_code:[3700.100]
Search Query: po_line_status_code:(1100.200 1100.500 1100.600 1100.400 1100.300 1100.750 1100.450)
Result:
We get the document with po_line_status_code [3700.100] as well.
Does Solr internally tokenize solr.StrField values containing dots, or is some regular-expression matching going on here? Sounds like a bug to me.
We don't get this document when we change the query to either of the following:
1> po_line_status_code:(1200.200 1200.500 1200.600 1200.400 1200.300 1200.750 1200.450)
2> po_line_status_code:(1100.200 1100.500 1100.600 1100.400 1100.300 1100.750 1100.450) AND po_no:938792842
We are using DSE version 4.7.4, which bundles Apache Solr 4.10.3.0.203.
Debug query output from one of the servers that is returning wrong documents:
response={numFound=2,start=0,docs=[SolrDocument{po_no=4575419580094, po_line_status_code=[3700.4031]}, SolrDocument{po_no=1575479951283, po_line_status_code=[3700.100]}]},debug={rawquerystring=po_line_status_code:(3 1100.200 29 5 6 1100.300 63 199 1100.500 200 1100.600 198 1100.400 343 344 345 346 347 409 410 428 1100.750 1100.450) ,querystring=po_line_status_code:(3 1100.200 29 5 6 1100.300 63 199 1100.500 200 1100.600 198 1100.400 343 344 345 346 347 409 410 428 1100.750 1100.450)]
I also see the following in the response, which I believe has something to do with ranking:
No match on required clause (po_line_status_code:3 po_line_status_code:1100.200 po_line_status_code:29 po_line_status_code:5 po_line_status_code:6 po_line_status_code:1100.300 po_line_status_code:63 po_line_status_code:199 po_line_status_code:1100.500 po_line_status_code:200 po_line_status_code:1100.600 po_line_status_code:198 po_line_status_code:1100.400 po_line_status_code:343 po_line_status_code:344 po_line_status_code:345 po_line_status_code:346 po_line_status_code:347 po_line_status_code:409 po_line_status_code:410 po_line_status_code:428 po_line_status_code:1100.750 po_line_status_code:1100.450)\n 0.0 = (NON-MATCH) product of:\n 0.0 = (NON-MATCH) sum of:\n 0.0 = coord(0/23)\n 0.015334824
Also, could this have something to do with re-indexing? If I re-index my documents, will that fix the issue?
The links to the files containing the Solr schema and Solr config can be found here.
I've had to put this in an answer as the comments won't allow formatting.
No, it's not a version problem, a tokenizer problem, or a bug in Solr.
solr.StrField is not tokenized at either index or query time, so it must be matching on something else. Can you post solrconfig.xml and schema.xml?
If you are searching on po_line_status_code this is the debug you should see:
"querystring": " po_line_status_code:(1100.200 1100.500 1100.600 1100.400 1100.300 1100.750 1100.450)",
"parsedquery": "(+(po_line_status_code:1100.200 po_line_status_code:1100.500 po_line_status_code:1100.600 po_line_status_code:1100.400 po_line_status_code:1100.300 po_line_status_code:1100.750 po_line_status_code:1100.450))/",
Whereas what you are seeing is
querystring=ship_node:610055 AND po_line_status_code:(3 1100.200 29 5 6 1100.300 63 199 1100.500 200 1100.600 198 1100.400 343 344 345 346 347 409 410 428 1100.750 1100.450) AND expected_ship_date:[2016-02-03T16:00:00.000Z TO 2016-06-09T13:59:59.059Z]
So your query string has been altered. I assume all your queries are through the solr admin tool? So that should leave DSE out of the loop.
I still wouldn't expect your query to match, but things are more complicated than you have presented: you also have ship_node and expected_ship_date in your query.
Also, the "No match on required clause" output says that you didn't match anything with the po_line_status_code query.
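The diagnosis above hinges on comparing querystring against parsedquery. As a minimal sketch (the host and core name `orders` are hypothetical; adjust for your DSE/Solr deployment), a request with `debug=query` can be built like this and then fetched with curl or requests:

```python
from urllib.parse import urlencode

# Hypothetical host and core; adjust for your deployment.
base = "http://localhost:8983/solr/orders/select"
params = {
    "q": "po_line_status_code:(1100.200 1100.500 1100.600)",
    "wt": "json",
    "debug": "query",   # ask Solr to echo querystring and parsedquery
    "rows": 0,          # we only want the debug section, not documents
}
url = base + "?" + urlencode(params)
print(url)
# Fetch this URL and compare response["debug"]["querystring"] with
# response["debug"]["parsedquery"]. Extra injected terms such as
# "3 29 5 6 ..." appearing in parsedquery mean something between your
# client and Solr rewrote the query before it was parsed.
```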
