TF 2 - Keras - Merging two separately trained models into a new ensemble model - artificial-intelligence

Newbie here ;)
I need your help ;)
I have the following problem:
I want to merge two TF 2.0 Keras models into a new model.
Take, for example, a ResNet50V2 without the head, retrained on new data - the saved weights are provided and loaded:
from tensorflow.keras.applications import ResNet50V2
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

resnet50v2 = ResNet50V2(weights='imagenet', include_top=False)
# resnet50v2.summary()
last_layer = resnet50v2.output
x = GlobalAveragePooling2D()(last_layer)
x = Dense(768, activation='relu', name='img_dense_768')(x)
out = Dense(NUM_CLASS, activation='softmax', name='img_output_layer')(x)
resnet50v2 = Model(inputs=resnet50v2.input, outputs=out)
resnet50v2.load_weights(pathToImageModel)
Suppose likewise a BERT network retrained on new data - saved weights are provided and loaded in the same way as shown above.
Now I want to skip the last softmax layer of each model and "merge" them into a new model.
So I took the layer before the last layer, whose names I know:
model_image_output = model_image.get_layer('img_dense_768').output
model_bert_output = model_bert.get_layer('bert_output_layer_768').output
I built two new models with this information:
model_img = Model(model_image.inputs, model_image_output)
model_bert = Model(model_bert.inputs, model_bert_output)
So now I want to concatenate the outputs of the two models into a new one and add a new "head":
concatenate = Concatenate()([model_img.output,model_bert.output])
x = Dense(512, activation = 'relu')(concatenate)
x = Dropout(0.4)(x)
output = Dense(NUM_CLASS, activation='softmax', name='final_multimodal_output')(x)
model_concat = Model([model_image.input, model_bert.input])
So far so good - the model code seemed to be valid, but then my knowledge ends.
The main questions are:
Even though the last softmax layers are skipped, the loaded weights should still be available, right?
The new concatenated model should then build, but doing so ends in this error message, which I only partially understand:
NotImplementedError: When subclassing the Model class, you should implement a call method.
Is there another way to create, for example, the whole ensemble network, load only the pretrained parts of it, and leave the rest trainable?
Am I missing something? I never subclassed the Model class - at least not intentionally XD
May I kindly ask for some hints?
Thanks in advance!!
Kind regards ragitagha
UPDATE:
So, to pin down my mistake more quickly and state the solution more precisely:
I had to change the lines
concatenate = Concatenate()([model_img.output,model_bert.output])
x = Dense(512, activation = 'relu')(concatenate)
x = Dropout(0.4)(x)
output = Dense(NUM_CLASS, activation='softmax', name='final_multimodal_output')(x)
model_concat = Model([model_image.input, model_bert.input])
to:
concatenate = Concatenate()([model_img.output,model_bert.output])
x = Dense(512, activation = 'relu')(concatenate)
x = Dropout(0.4)(x)
output = Dense(NUM_CLASS, activation='softmax', name='final_multimodal_output')(x)
model_concat = Model([model_image.input, model_bert.input], output)
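In other words: passing only the inputs to Model() makes Keras treat it as a subclassed model (hence the NotImplementedError about a missing call method); given both inputs and outputs, it builds a regular functional model, and the pretrained weights of all reused layers stay intact. For reference, here is a minimal end-to-end sketch of the whole ensemble. The text branch is only a stand-in built the same way the real BERT branch would be, and NUM_CLASS, the input shapes, and the weight loading are placeholders - a sketch, not drop-in code:
from tensorflow.keras.applications import ResNet50V2
from tensorflow.keras.layers import (Concatenate, Dense, Dropout,
                                     GlobalAveragePooling2D, Input)
from tensorflow.keras.models import Model

NUM_CLASS = 5  # placeholder

# Image branch, as in the question (weights=None keeps the sketch offline)
base = ResNet50V2(weights=None, include_top=False)
x = GlobalAveragePooling2D()(base.output)
x = Dense(768, activation='relu', name='img_dense_768')(x)
img_head = Dense(NUM_CLASS, activation='softmax', name='img_output_layer')(x)
model_image = Model(base.input, img_head)
# model_image.load_weights(pathToImageModel)  # project-specific path

# Stand-in for the BERT branch; the real one is built and loaded the same way
text_in = Input(shape=(128,), name='text_input')
t = Dense(768, activation='relu', name='bert_output_layer_768')(text_in)
bert_head = Dense(NUM_CLASS, activation='softmax')(t)
model_bert = Model(text_in, bert_head)

# Cut off both softmax heads; building a new Model over existing layers
# does not reinitialize them, so the loaded weights are preserved.
img_feat = model_image.get_layer('img_dense_768').output
txt_feat = model_bert.get_layer('bert_output_layer_768').output

# New trainable head on top of the concatenated features
merged = Concatenate()([img_feat, txt_feat])
h = Dense(512, activation='relu')(merged)
h = Dropout(0.4)(h)
out = Dense(NUM_CLASS, activation='softmax', name='final_multimodal_output')(h)
model_concat = Model([model_image.input, model_bert.input], out)
model_concat.summary()
To keep the pretrained branches frozen, set layer.trainable = False on their layers before compiling, leaving only the new head trainable.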

Related

How to assign multiple variables in a loop for graphs in Python 3

I am relatively new to coding and I have a few issues I don't quite understand how to solve yet. I'm trying to build code that will produce graphs from a ticker list, with the data downloaded from Yahoo Finance. Setting aside manually assigning stock1 (and so forth) a ticker for a moment...
I want to figure out how to loop over the data that feeds each graph, so TSLA and MSFT in my code. So far I have the code below, in which I already changed dfs and stocks. I just don't understand how to write the loop. If anyone has some good resources on loops as well, let me know.
Later, I would like to save the graphs as PNGs with file names corresponding to the stock being pulled from Yahoo, so extra points if someone knows how to work in this code (savefig = dict(fname="tsla.png", bbox_inches="tight"), which goes after style='default'). Thanks for the help!
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
import mplfinance as mpf
import yfinance as yf
#yahoo info
start = "2020-01-01"
end = dt.datetime.now()
stock1 = 'TSLA'
stock2 = 'MSFT'
df1 = yf.download(stock1, start, end)
df2 = yf.download(stock2, start, end)
stocks = [[stock1],[stock2]]
dfs = [[df1],[df2]]
changingvars = [[stocks],[dfs]]
#graph1
short_sma = 20
long_sma = 50
SMAs = [short_sma, long_sma]
for i in SMAs:
    dfs["SMA_" + str(i)] = dfs.iloc[:, 4].rolling(window=i).mean()

graph1 = mpf.plot(dfs, type='candlestick', figratio=(16, 6),
                  mav=(short_sma, long_sma),
                  volume=True, title=str(stocks),
                  style='default')
plt.show()
Not sure why you are calculating your own SMAs, and grouping your stocks and dataframes, if your goal is only to create multiple plots (one for each stock). Also, if you are using mplfinance, there is no need to import and/or use matplotlib.pyplot (nor to call plt.show(); mplfinance does that for you).
That said, here is a suggestion for your code. I've added tickers for Apple and Alphabet (Google), just to demonstrate how this can be extended.
import datetime as dt
import mplfinance as mpf
import yfinance as yf

stocklist = ['TSLA', 'MSFT', 'AAPL', 'GOOGL']
start = "2020-01-01"
end = dt.datetime.now()
short_sma = 20
long_sma = 50

for stock in stocklist:
    df = yf.download(stock, start, end)
    filename = stock.lower() + '.png'
    mpf.plot(df, type='candlestick', figratio=(16, 6),
             mav=(short_sma, long_sma),
             volume=True, title=stock, style='default',
             savefig=dict(fname=filename, bbox_inches="tight")
             )
The above code will not display the plots for each stock, but will save each one in its own .png file locally (where you run the script) for you to view afterwards.
Note also that it does not save the actual data; it only plots the data and then moves on to the next stock, reassigning the dataframe variable (which automatically discards the previous stock's data). If you want to save the data for each stock in a separate CSV file, that is easy to do as well with Pandas' .to_csv() method.
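For instance, one extra line inside the same loop would do it (the file-naming scheme here is just an assumption):
# inside the for-stock loop, right after yf.download():
df.to_csv(stock.lower() + '.csv')  # e.g. tsla.csv, written next to the .png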
Also, I am assuming you are calling yf.download() correctly. I am not familiar with that API so I just left that part of the code as you had it.
HTH. Let me know. --Daniel

Keras Multiple inputs - Expected to see 2 array(s), but instead got the following list of 1 arrays:

Following is the code to create the model. The model has 2 input layers, an embedding layer, an LSTM, an attention layer, and a dense layer. I am getting an error (image attached) when I try to execute model.fit with multiple inputs.
I'm not sure why - please explain.
MAX_SEQUENCE_LENGTH = 20
# First input layer
sequence_ip = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
# Second input layer
time_Decay_ip = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='float32')
# Adding embedding layer
embedding_layer = Embedding(vocab_length, output_dim = 32, input_length=seq_length, trainable=True)
embedded_sequences = embedding_layer(sequence_ip)
l_gru = LSTM(100, return_sequences=True)(embedded_sequences)
l_att = attention()([l_gru, time_Decay_ip])
preds = Dense(1, activation='softmax', trainable = True)(l_att)
model = Model([sequence_ip, time_Decay_ip], preds)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
model.summary()
model.fit(x = [np.array(X_train), np.array(time_decay_tr)], y = np.array(Y_train), validation_data=(X_test, Y_test), nb_epoch=10, batch_size=9)
I had this error when using Keras; there are multiple ways to fix it:
You can either uninstall and reinstall the data you are using,
or you can uninstall and reinstall TensorFlow.
You should also check whether your GPU is trying to run CUDA (for NVIDIA cards); if it is and you don't have an NVIDIA GPU, use the CPU instead.
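For what it's worth, this particular Keras message usually means the model's two inputs were not supplied everywhere data is fed: in the question, validation_data passes X_test alone, while the model expects a list of two arrays. A hedged sketch of a fit call that supplies both inputs for training and validation (time_decay_te is an assumed name for the test-set counterpart of time_decay_tr):
model.fit(
    x=[np.array(X_train), np.array(time_decay_tr)],
    y=np.array(Y_train),
    validation_data=(
        [np.array(X_test), np.array(time_decay_te)],  # time_decay_te: assumed name
        np.array(Y_test),
    ),
    epochs=10,   # `nb_epoch` is the old pre-Keras-2 spelling
    batch_size=9,
)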

Why do I have a problem when I train my model in PyTorch?

I'm new to PyTorch and AI, but I have some trouble when I try to train my model.
I just created my Dataset and DataLoader:
train_dataset = TensorDataset(tensor_train,tensor_label)
train_dataloader = DataLoader(train_dataset,batch_size=32,shuffle=True)
And after this, my criterion and optimiser:
criterion = nn.CrossEntropyLoss()
optimiser=optim.Adam(net.parameters(),lr=0.2)
And I try to train it with:
for epoch in range(10):
    for data in train_dataloader:
        inputs, labels = data
        output = net(torch.Tensor(inputs))
        loss = criterion(output, labels.to(device))
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
But I got this error
d:\py\lib\site-packages\torch\nn\modules\module.py in <lambda>(t)
321 Module: self
322 """
--> 323 return self._apply(lambda t: t.type(dst_type))
324
325 def float(self):
TypeError: dtype must be a type, str, or dtype object
I will be happy if someone finds the problem, thanks.
I see two possible problems:
1) Your dataloader outputs a tensor, so you don't need to create another tensor. Just do this:
output = net(inputs)
2) Are you sending your model to the device? If so, you need to send the inputs there as well. If not, you don't need to move the labels either:
loss = criterion(output, labels)
However, I'm not sure whether the error you're getting is related to these 2 points. Consider posting the offending line from your own code (instead of the lib's). Also, consider including more information about tensor_train and tensor_label.
Thanks for the reply, but the problem came from something else.
I was creating my model like this:
class Perceptron(nn.Module):
    def __init__(self):
        super(Perceptron, self).__init__()
        self.type = nn.Linear(4, 3)
    def forward(self, x):
        return self.type(x)
net = Perceptron().to(device)
and nn.Module already defines a type method - that's why I was getting this error (I think). I solved it by renaming self.type to anything other than type.
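For illustration, a minimal sketch of the renamed version (the attribute name fc is arbitrary):
import torch
import torch.nn as nn

class Perceptron(nn.Module):
    def __init__(self):
        super().__init__()
        # Any name other than `type` works: nn.Module already defines a
        # .type() method, and shadowing it with a submodule breaks the
        # internal dtype/device conversion seen in the traceback.
        self.fc = nn.Linear(4, 3)

    def forward(self, x):
        return self.fc(x)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = Perceptron().to(device)
print(net(torch.randn(2, 4, device=device)).shape)  # torch.Size([2, 3])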

Need to figure out how to use DeepZoomTools.dll to create DZI

I am not familiar with .NET coding.
However, I must create DZI sliced image assets on a shared server and am told that I can instantiate and use DeepZoomTools.dll.
Can someone show me a very simple DZI creation script that demonstrates the proper .NET coding technique? I can embellish as needed, I'm sure, but don't know where to start.
Assuming I have a jpg, how does a script simply slice it up and save it?
I can imagine it's only a few lines of code. The server is running IIS 7.5.
If anyone has a simple example, I'd be most appreciative.
Thanks
I don't know myself, but you might ask in the OpenSeadragon community:
https://github.com/openseadragon/openseadragon/issues
Someone there might know.
Does it have to be DeepZoomTools.dll? There are a number of other options for creating DZI files. Here are a few:
http://openseadragon.github.io/examples/creating-zooming-images/
Example of building a Seadragon Image from multiple images.
In this, the "clsCanvas" objects and collection can pretty much be ignored, it was an object internal to my code that was generating the images with GDI+, then putting them on disk. The code below just shows how to get a bunch of images from file and assemble them into a zoomable collection. Hope this helps someone :-).
CollectionCreator cc = new CollectionCreator();

// set default values that make sense for conversion options
cc.ServerFormat = ServerFormats.Default;
cc.TileFormat = ImageFormat.Jpg;
cc.TileSize = 256;
cc.ImageQuality = 0.92;
cc.TileOverlap = 0;

// the max level should always correspond to the log base 2 of the tile size, unless otherwise specified
cc.MaxLevel = (int)Math.Log(cc.TileSize, 2);

List<Microsoft.DeepZoomTools.Image> aoImages = new List<Microsoft.DeepZoomTools.Image>();
double fLeftShift = 0;
foreach (clsCanvas oCanvas in aoCanvases)
{
    // viewport width as a function of this canvas, so the width of this canvas is 1
    double fThisImgWidth = oCanvas.MyImageWidth - 1; // the -1 creates a 1px overlap, hiding the seam between images
    double fTotalViewportWidth = fTotalImageWidth / fThisImgWidth;
    double fMyLeftEdgeInViewportUnits = -fLeftShift / fThisImgWidth; // please don't ask me why this is a negative number
    double fMyTopInViewportUnits = -fTotalViewportWidth * 0.3;
    fLeftShift += fThisImgWidth;

    Microsoft.DeepZoomTools.Image oImg = new Microsoft.DeepZoomTools.Image(oCanvas.MyFileName.Replace("_Out_Tile", ""));
    oImg.ViewportWidth = fTotalViewportWidth;
    oImg.ViewportOrigin = new System.Windows.Point(fMyLeftEdgeInViewportUnits, fMyTopInViewportUnits);
    aoImages.Add(oImg);
}

// create the collection from the assembled list of images
cc.Create(aoImages, sMasterOutFile);
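For the single-JPEG case in the question, the same DLL also exposes an ImageCreator class that follows the same pattern as the CollectionCreator above. A minimal sketch, assuming the DLL is referenced and with placeholder paths:
using Microsoft.DeepZoomTools;

class SliceOneImage
{
    static void Main()
    {
        // Slices source.jpg into a tile pyramid plus a source.dzi descriptor.
        // Property names mirror the CollectionCreator ones used above.
        ImageCreator ic = new ImageCreator();
        ic.TileFormat = ImageFormat.Jpg;
        ic.TileSize = 256;
        ic.TileOverlap = 1;
        ic.ImageQuality = 0.92;
        ic.Create(@"C:\images\source.jpg", @"C:\images\dzi\source"); // placeholder paths
    }
}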

LDAPMAP - Mapping SAP data to LDAP via RSLDAPSYNC_USER function

We are looking at syncing some of our LDAP (Active Directory) data with what is stored in SAP. SAP provides several function modules that allow you to write a custom program to handle mapping the data, but we are looking to use the provided solution that makes use of RSLDAPSYNC_USER.
The issue I'm having is understanding how the mapping of fields is performed in LDAPMAP. In particular, when performing the Mapping Overview, where are the structures shown below defined?
Also, we have a function module that is currently available for grabbing all of the fields we would like to send to LDAP, but can the screen shown below be used to call a custom function module to grab the data I require? If so, please give an example.
Thanks,
Mike
I am not sure if this is what you are asking, but as an answer to your second question:
You can specify the attributes that you want to get. The LDAP_READ function will return the results in the entries parameter.
CALL FUNCTION 'LDAP_READ'
  EXPORTING
    base       = base
*   scope      = 2
    filter     = filter
*   attributes = attributes_ldap
    timeout    = s_timeout
    attributes = t_attributes_ldap
  IMPORTING
    entries    = t_entries_ldap   "<< entries will come here
  EXCEPTIONS
    no_authoriz  = 1
    conn_outdate = 2
    ldap_failure = 3
    not_alive    = 4
    other_error  = 5
    OTHERS       = 6.
(Screenshots of the entries and attributes parameter structures followed here; they are not reproduced.)
