ClientError: Annotation value 10 found in labels. This is greater than number of classes., exit code: 2 - amazon-sagemaker

My total number of classes is 10:
1-"Button_damage"
2-"Cracks"
3-"Edge_damage"
4-"Frame_damage"
5-"Hinge_damage"
6-"Screen_damage"
7-"Good_Button"
8-"Good_Hinge"
9-"Good_screen"
10-"Good_frame"
When I set num_classes=10 in the hyperparameters, it throws an error:
ClientError: Annotation value 10 found in labels. This is greater than number of classes., exit code: 2
I changed num_classes=9 and tried again; it shows the same error.

It worked when I changed num_classes=11, because one label, "0", is added by default (a sketch of setting the hyperparameter follows this list):
0-"BACKGROUND"
1-"Button_damage"
2-"Cracks"
3-"Edge_damage"
4-"Frame_damage"
5-"Hinge_damage"
6-"Screen_damage"
7-"Good_Button"
8-"Good_Hinge"
9-"Good_screen"
10-"Good_frame"

Related

How do I filter my values using an array?

I am trying to sift through some free-text answers on geographical locations. As one of the steps, I want to check whether the answer is any of the 290 municipalities in my country. Since 290 entries would make my code cumbersome and hard to read, I am trying to save them in an array, like below:
Data resorTEST;
keep R_res_ort_namn R_res_ort_txt R_kom_lan R_kommun;
set resor1TEST resor2TEST resor3TEST;
R_res_ort_namn=strip(lowcase(R_res_ort_namn));
R_res_ort_txt=strip(lowcase(R_res_ort_txt));
R_kom_lan=strip(lowcase(R_kom_lan));
array kommuner{290} $ ("upplands väsby" "vallentuna" "österåker" "värmdö"
"järfälla" "ekerö" "huddinge" "botkyrka" "salem"
"haninge" "tyresö" "upplands-bro" "nykvarn" "täby"
"danderyd" "sollentuna" "stockholm" "södertälje" "nacka" "sundbyberg"
"solna" "lidingö" "vaxholm" "norrtälje" "sigtuna" "nynäshamn"
"håbo" "älvkarleby" "knivsta" "heby" "tierp" "uppsala"
"enköping" "östhammar" "vingåker" "gnesta" "nyköping" "oxelösund" "flen"
"katrineholm" "eskilstuna" "strängnäs" "trosa" "ödeshög" "ydre"
"kinda" "boxholm" "åtvidaberg" "finspång" "valdemarsvik" "linköping"
"norrköping" "söderköping" "motala" "vadstena" "mjölby" "aneby"
"gnosjö" "mullsjö" "habo" "gislaved" "vaggeryd" "jönköping"
"nässjö" "värnamo" "sävsjö" "vetlanda" "eksjö" "tranås"
"uppvidinge" "lessebo" "tingsryd" "alvesta" "älmhult" "markaryd"
"växjö" "ljungby" "högsby" "torsås" "mörbylånga" "hultsfred"
"mönsterås" "emmaboda" "kalmar" "nybro" "oskarshamn" "västervik"
"vimmerby" "borgholm" "gotland" "olofström" "karlskrona" "ronneby"
"karlshamn" "sölvesborg" "svalöv" "staffanstorp" "burlöv" "vellinge"
"östra göinge" "örkelljunga" "bjuv" "kävlinge" "lomma" "svedala"
"skurup" "sjöbo" "hörby" "höör" "tomelilla" "bromölla"
"osby" "perstorp" "klippan" "åstorp" "båstad" "malmö"
"lund" "landskrona" "helsingborg" "höganäs" "eslöv" "ystad"
"trelleborg" "kristianstad" "simrishamn" "ängelholm" "hässleholm" "hylte"
"halmstad" "laholm" "falkenberg" "varberg" "kungsbacka" "härryda"
"partille" "öckerö" "stenungsund" "tjörn" "orust" "sotenäs"
"munkedal" "tanum" "dals-ed" "färgelanda" "ale" "lerum"
"vårgårda" "bollebygd" "grästorp" "essunga" "karlsborg" "gullspång"
"tranemo" "bengtsfors" "mellerud" "lilla edet" "mark" "svenljunga"
"herrljunga" "vara" "götene" "tibro" "töreboda" "göteborg"
"mölndal" "kungälv" "lysekil" "uddevalla" "strömstad" "vänersborg"
"trollhättan" "alingsås" "borås" "ulricehamn" "åmål" "mariestad"
"lidköping" "skara" "skövde" "hjo" "tidaholm" "falköping"
"kil" "eda" "torsby" "storfors" "hammarö" "munkfors"
"forshaga" "grums" "årjäng" "sunne" "karlstad" "kristinehamn"
"filipstad" "hagfors" "arvika" "säffle" "lekeberg" "laxå"
"hallsberg" "degerfors" "hällefors" "ljusnarsberg" "örebro" "kumla"
"askersund" "karlskoga" "nora" "lindesberg" "skinnskatteberg" "surahammar"
"kungsör" "hallstahammar" "norberg" "västerås" "sala" "fagersta"
"köping" "arboga" "vansbro" "malung-sälen" "gagnef" "leksand"
"rättvik" "orsa" "älvdalen" "smedjebacken" "mora" "falun"
"borlänge" "säter" "hedemora" "avesta" "ludvika" "ockelbo"
"hofors" "ovanåker" "nordanstig" "ljusdal" "gävle" "sandviken"
"söderhamn" "bollnäs" "hudiksvall" "ånge" "timrå" "härnösand"
"sundsvall" "kramfors" "sollefteå" "örnsköldsvik" "ragunda" "bräcke"
"krokom" "strömsund" "åre" "berg" "härjedalen" "östersund"
"nordmaling" "bjurholm" "vindeln" "robertsfors" "norsjö" "malå"
"storuman" "sorsele" "dorotea" "vännäs" "vilhelmina" "åsele"
"umeå" "lycksele" "skellefteå" "arvidsjaur" "arjeplog" "jokkmokk"
"överkalix" "kalix" "övertorneå" "pajala" "gällivare" "älvsbyn"
"luleå" "piteå" "boden" "haparanda" "kiruna");
/*if not missing(R_res_ort_namn) then R_kommun=prxchange("s/^.*-(.* kommun)/$1/",1,R_res_ort_namn);
else if prxmatch("/^.*([a-zA-Z]*? kommun).*$/",R_res_ort_txt) then R_kommun=prxchange("s/^.*?([a-zA-Z]*? kommun).*$/$1/",-1,R_res_ort_txt);
else if prxmatch("/^.*([a-zA-Z]*? kommun).*$/",R_kom_lan) then R_kommun=prxchange("s/^.*?([a-zA-Z]*? kommun).*$/$1/",-1,R_kom_lan);
else */if R_res_ort_txt in kommuner then R_kommun=R_res_ort_txt;
run;
However, for some reason this does not seem to work for all of the municipalities. The municipality of "uppsala" works for instance, but not the municipality of "ängelholm".
I have tried stripping the variables of whitespace and converting everything to lowercase. What am I doing wrong?
Additional info:
For some reason it does work flawlessly if I skip the array and just copy-paste the exact same list of municipality names into parentheses after the IN operator. However, I would need to repeat this step 5-6 times, and that would make my code quite cumbersome.
You are defining the array
Array kommuner{290} $
without an explicit length, so each element defaults to character length 8. "uppsala" (7 characters) fits, but "ängelholm" is longer than 8 characters, so it gets truncated and never matches. See what happens when you fix that; a sketch of the fix follows.
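A minimal sketch of the corrected declaration (the length 20 is this sketch's choice; any length at least as long as the longest municipality name works):

/* Give the array elements an explicit length so names longer than */
/* the default 8 characters are stored without truncation.         */
array kommuner{290} $ 20
    ("upplands väsby" "vallentuna" "österåker" "värmdö"
     /* ... the remaining municipality names exactly as listed in the question ... */
     "luleå" "piteå" "boden" "haparanda" "kiruna");

if R_res_ort_txt in kommuner then R_kommun=R_res_ort_txt;

Declaring the array _temporary_ would additionally keep the 290 helper variables out of the data step, but the explicit length is what fixes the matching.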

Lasso via GridSearchCV: ConvergenceWarning: Objective did not converge

I am trying to find the optimal parameter of a Lasso regression:
alpha_tune = {'alpha': np.linspace(start=0.000005, stop=0.02, num=200)}
model_tuner = Lasso(fit_intercept=True)
cross_validation = RepeatedKFold(n_splits=5, n_repeats=3, random_state=1)
model = GridSearchCV(estimator=model_tuner, param_grid=alpha_tune, cv=cross_validation, scoring='neg_mean_squared_error', n_jobs=-1).fit(features_train_std, labels_train)
print(model.best_params_['alpha'])
My variables are demeaned and standardized, but I get the following warning:
ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.279e+02, tolerance: 6.395e-01
I know this warning has been reported several times, but none of the previous posts explain how to solve it. In my case, it is triggered by the fact that the lower bound 0.000005 is very small, but this is a reasonable value, as indicated by solving the tuning problem via the information criteria:
lasso_aic = LassoLarsIC(criterion='aic', fit_intercept=True, eps=1e-16, normalize=False)
lasso_aic.fit(X_train_std, y_train)
print('Lambda: {:.8f}'.format(lasso_aic.alpha_))
lasso_bic = LassoLarsIC(criterion='bic', fit_intercept=True, eps=1e-16, normalize=False)
lasso_bic.fit(X_train_std, y_train)
print('Lambda: {:.8f}'.format(lasso_bic.alpha_))
AIC and BIC give values of around 0.000008. How can this warning be solved?
Increasing max_iter in Lasso beyond its default of 1000 will do the job:
alpha_tune = {'alpha': np.linspace(start=0.000005, stop=0.02, num=200)}
model_tuner = Lasso(fit_intercept=True, max_iter=5000)
cross_validation = RepeatedKFold(n_splits=5, n_repeats=3, random_state=1)
model = GridSearchCV(estimator=model_tuner, param_grid=alpha_tune, cv=cross_validation, scoring='neg_mean_squared_error', n_jobs=-1).fit(features_train_std, labels_train)
print(model.best_params_['alpha'])

lapply calling .csv for changes in a parameter

Good afternoon
I am currently trying to pull some data from pushshift, but I am maxing out at 100 posts. Below is the code for pulling one day, which works great:
testdata1<-getPushshiftData(postType = "submission", size = 1000, before = "1546300800", after= "1546200800", subreddit = "mysubreddit", nest_level = 1)
I have a list of UTC timestamps for the beginning and end of each day of a month. What I would like to do is work out the syntax to substitute the "after" and "before" values for each day and have each day's results appended to the pulled data. Even if it put the data into a bunch of separate smaller datasets, I could work with it.
Here is my (feeble) attempt; "links" is the data frame with the UTCs:
mydata<- lapply(1:30, function(x) getPushshiftData(postType = "submission", size = 1000, after= links$utcstart[,x],before = links$utcendstart[,x], subreddit = "mysubreddit", nest_level = 1))
Here is the error message I get: Error in links$utcstart[, x] : incorrect number of dimensions
I've also tried without the "function (x)" argument and get the following message:
Error in ifelse(is.null(after), "", sprintf("&after=%s", after)) :
object 'x' not found
Can anyone help with this?
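A minimal sketch of one likely fix (assuming links$utcstart and links$utcendstart are ordinary vector columns of the links data frame): the "incorrect number of dimensions" error comes from indexing them with the matrix-style [, x], while a plain vector takes single-bracket indexing [x].

# Index the columns as vectors: links$utcstart[x], not links$utcstart[, x]
mydata <- lapply(1:30, function(x) {
  getPushshiftData(postType   = "submission",
                   size       = 1000,
                   after      = links$utcstart[x],
                   before     = links$utcendstart[x],
                   subreddit  = "mysubreddit",
                   nest_level = 1)
})

# If each call returns a data frame, stack the 30 daily pulls into one
mydata_all <- do.call(rbind, mydata)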

Error when checking target: expected dense_9 to have 3 dimensions, but got array with shape (2110, 1)

I want to predict stock prices, and I've defined my model:
model = keras.models.Sequential([
    keras.layers.Dense(10, activation='relu',
                       input_shape=(window, train.shape[1],)),
    keras.layers.Dense(5, activation='relu'),
    keras.layers.Dense(1)])
model.compile(loss='mse',
              optimizer=keras.optimizers.Adam(learning_rate=0.0001, beta_1=0.9,
                                              beta_2=0.999, amsgrad=False))
history = model.fit(train_dataset, train_labels, epochs=150)
I get this error:
Error when checking target: expected dense_9 to have 3 dimensions, but got array with shape (2110, 1)
Note: my input shape is (2110, 22, 16) and my output shape is (2110, 1); 2110 is the number of examples.
I need help to fix this.
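A minimal sketch of one way to make the shapes line up (an assumption about the intended architecture, not something stated in the post): Dense layers applied to a 3-D input of shape (batch, 22, 16) produce a 3-D output, so the final Dense(1) emits (batch, 22, 1) while the targets are (2110, 1). Flattening before the last layer collapses the time dimension so the model outputs (batch, 1):

from tensorflow import keras

window, n_features = 22, 16            # from the stated input shape (2110, 22, 16)

model = keras.models.Sequential([
    keras.layers.Dense(10, activation='relu', input_shape=(window, n_features)),
    keras.layers.Dense(5, activation='relu'),
    keras.layers.Flatten(),            # (batch, 22, 5) -> (batch, 110)
    keras.layers.Dense(1)              # (batch, 1), matching the (2110, 1) labels
])
model.compile(loss='mse', optimizer=keras.optimizers.Adam(learning_rate=0.0001))
# train_dataset and train_labels as in the original post
history = model.fit(train_dataset, train_labels, epochs=150)

A pooling layer or a recurrent layer over the 22-step window would be alternative ways to collapse the time dimension; Flatten is simply the smallest change to the model above.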

Error in rowSums(x) : 'x' must be an array of at least two dimensions (Vegan:oecosimu)

I am trying to use the oecosimu function from the vegan package.
library(sna)
library(permute)
library(lattice)
library(vegan)
library(bipartite)
bio<-read.csv("/home/vicar66/Raw_data/Jan12/98percents_April_16/otu_table/Chapter1/no_bloom_bio_data_L3.txt")
rownames(bio)<-bio[,1]
bio[,1]<-NULL
bio_m<-as.matrix(bio)
a<-oecosimu(bio_m,bipartite::C.score,"swap")
but I keep getting this error message:
Attaching package: 'bipartite'
The following object is masked from 'package:vegan':
nullmodel
Error in rowSums(x) : 'x' must be an array of at least two dimensions
Calls: oecosimu -> nullmodel -> rowSums
Execution halted
demo data:
             Ciliophora    Dinoflagellates  MALVs        Choanoflagellida
DNAHIN5m     0.062469804   0.826323018      0.031084701  0.000323747
DNAHIN35m    0.045216826   0.595750636      0.187010719  0.000917053
DNAHIN150m   0.018434224   0.204865296      0.531016032  0.017009618
DNAHIN240m   0.016211333   0.889640227      0.04191846   0.03087407
** The first cell (top-left) is empty; the first row contains the column names and the first column contains the row names.
Has anyone encountered this problem?
Thanks!
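One possible cause, offered as an assumption since the post does not show dim(bio_m): the file is a tab-delimited .txt, so read.csv (which expects commas) reads it as a single column; after that column is copied to the row names and dropped, the matrix has no data columns left, which can break the nullmodel step shown in the traceback. A minimal sketch of reading the table with an explicit separator and checking the dimensions before calling oecosimu:

library(vegan)
library(bipartite)

# Read a tab-delimited file, keeping the first (empty-headed) column as row names
bio <- read.delim("/home/vicar66/Raw_data/Jan12/98percents_April_16/otu_table/Chapter1/no_bloom_bio_data_L3.txt",
                  row.names = 1, check.names = FALSE)
bio_m <- as.matrix(bio)
dim(bio_m)   # should be (samples x taxa), e.g. 4 x 4 for the demo data, not n x 0

a <- oecosimu(bio_m, bipartite::C.score, "swap")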
