MATLAB - Creating an array of images depending on their correlation

I've created a program for a project that tests images against one another to see whether they are the same image. I've decided to use correlation, since the images I am using are styled in the same way, and with this I've been able to get everything working up to this point.
I now wish to create an array of images again, but this time ordered by their correlation. For example, if I'm testing a 50 pence coin and I test 50 images against it, I want the 5 highest correlations to be stored in an array for later use. I'm unsure how to do this, as each item in the array needs to hold more than one value: the image location/name and its correlation percentage.
%Program Created By Ben Parry, 2016.
clc(); %Simply clears the console window
%Targets the image the user picks
inputImage = imgetfile();
%Targets all the images inside this directory
referenceFolder = 'M:\Project\MatLab\Coin Image Processing\Saved_Images';
if ~isdir(referenceFolder)
    errorMessage = sprintf('Error: Folder does not exist!');
    uiwait(warndlg(errorMessage)); %Displays an error if the folder doesn't exist
    return;
end
filePattern = fullfile(referenceFolder, '*.jpg');
jpgFiles = dir(filePattern);
for i = 1:length(jpgFiles)
    baseFileName = jpgFiles(i).name;
    fullFileName = fullfile(referenceFolder, baseFileName);
    fprintf(1, 'Reading %s\n', fullFileName);
    imageArray = imread(fullFileName);
    imshow(imageArray);
    firstImage = imread(inputImage); %Reading the image
    %Converting the images to Black & White
    firstImageBW = im2bw(firstImage);
    secondImageBW = im2bw(imageArray);
    %Finding the correlation, then converting it into a percentage
    c = corr2(firstImageBW, secondImageBW);
    corrValue = sprintf('%.0f%%', 100*c);
    %Custom messaging for the possible outcomes
    corrMatch = sprintf('The images are the same (%s)', corrValue);
    corrUnMatch = sprintf('The images are not the same (%s)', corrValue);
    %Branching for the two possible outcomes
    if c >= 0.99 %Define a percentage for the correlation to reach
        disp(' ');
        disp('Images Tested:');
        disp(inputImage);
        disp(fullFileName);
        disp(corrMatch);
        disp(' ');
    else
        disp(' ');
        disp('Images Tested:');
        disp(inputImage);
        disp(fullFileName);
        disp(corrUnMatch);
        disp(' ');
    end
    imageArray = imread(fullFileName);
    imshow(imageArray);
end

You can use the struct() function to create structures.
Initializing an array of structs:
imStruct = struct('fileName', '', 'image', [], 'correlation', 0);
imData = repmat(imStruct, length(jpgFiles), 1);
Setting field values:
for i = 1:length(jpgFiles)
    % ... (compute fullFileName, imageArray and the numeric correlation c as in your loop)
    imData(i).fileName = fullFileName;
    imData(i).image = imageArray;
    imData(i).correlation = c; % store the numeric c, not the formatted string corrValue
end
Extract the values of the correlation field and select the 5 highest correlations:
corrList = [imData.correlation];
[~, sortedInd] = sort(corrList, 'descend');
selectedData = imData(sortedInd(1:min(5, numel(imData)))); % guard against fewer than 5 images
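A minimal usage sketch (names as above): reading the retained file names and scores back out.
for k = 1:numel(selectedData)
    fprintf('%s: %.0f%%\n', selectedData(k).fileName, 100*selectedData(k).correlation);
end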


Interpolating GFS winds from isobaric to height coordinates using Metpy

I have been tasked with making plots of winds at various levels of the atmosphere to support aviation. While I have been able to make some nice plots using GFS model data (see code below), I'm really having to make a rough approximation of height using the pressure coordinates available from the GFS. I'm using winds at 300 hPa, 700 hPa, and 925 hPa to approximate the winds at 30,000 ft, 9,000 ft, and 3,000 ft. My question is really for the MetPy gurus out there: is there a way I can interpolate these winds to a height surface? It sure would be nice to get the actual winds at these height levels! Thanks for any light anyone can shed on this subject!
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import numpy as np
from netCDF4 import num2date
from datetime import datetime, timedelta
from siphon.catalog import TDSCatalog
from siphon.ncss import NCSS
from PIL import Image
from matplotlib import cm
# For the vertical levels we want to grab with our queries
# Levels need to be in Pa not hPa
Levels = [30000,70000,92500]
# Time deltas for days
Deltas = [1,2,3]
#Deltas = [1]
# Levels in hPa for the file names
LevelDict = {30000:'300', 70000:'700', 92500:'925'}
# The path to where our banners are stored
impath = 'C:\\Users\\shell\\Documents\\Python Scripts\\Banners\\'
# Final images saved here
imoutpath = 'C:\\Users\\shell\\Documents\\Python Scripts\\TVImages\\'
# Quick function for finding out which variable is the time variable in the
# netCDF files
def find_time_var(var, time_basename='time'):
    for coord_name in var.coordinates.split():
        if coord_name.startswith(time_basename):
            return coord_name
    raise ValueError('No time variable found for ' + var.name)
# Function to grab data at different levels from Siphon
def grabData(level):
    query.var = set()
    query.variables('u-component_of_wind_isobaric', 'v-component_of_wind_isobaric')
    query.vertical_level(level)
    data = ncss.get_data(query)
    u_wind_var = data.variables['u-component_of_wind_isobaric']
    v_wind_var = data.variables['v-component_of_wind_isobaric']
    time_var = data.variables[find_time_var(u_wind_var)]
    lat_var = data.variables['lat']
    lon_var = data.variables['lon']
    return u_wind_var, v_wind_var, time_var, lat_var, lon_var
# Construct a TDSCatalog instance pointing to the gfs dataset
best_gfs = TDSCatalog('http://thredds-jetstream.unidata.ucar.edu/thredds/catalog/grib/'
'NCEP/GFS/Global_0p5deg/catalog.xml')
# Pull out the dataset you want to use and look at the access URLs
best_ds = list(best_gfs.datasets.values())[1]
#print(best_ds.access_urls)
# Create NCSS object to access the NetcdfSubset
ncss = NCSS(best_ds.access_urls['NetcdfSubset'])
print(best_ds.access_urls['NetcdfSubset'])
# Looping through the forecast times
for delta in Deltas:
    # Create lat/lon box and the time(s) for location you want to get data for
    now = datetime.utcnow()
    fcst = now + timedelta(days=delta)
    timestamp = datetime.strftime(fcst, '%A')
    query = ncss.query()
    query.lonlat_box(north=78, south=45, east=-90, west=-220).time(fcst)
    query.accept('netcdf4')
    # Now looping through the levels to create our plots
    for level in Levels:
        u_wind_var, v_wind_var, time_var, lat_var, lon_var = grabData(level)
        # Get actual data values and remove any size 1 dimensions
        lat = lat_var[:].squeeze()
        lon = lon_var[:].squeeze()
        u_wind = u_wind_var[:].squeeze()
        v_wind = v_wind_var[:].squeeze()
        # converting to knots
        u_windkt = u_wind * 1.94384
        v_windkt = v_wind * 1.94384
        wspd = np.sqrt(np.power(u_windkt, 2) + np.power(v_windkt, 2))
        # Convert number of hours since the reference time into an actual date
        time = num2date(time_var[:].squeeze(), time_var.units)
        print(time)
        # Combine 1D latitude and longitudes into a 2D grid of locations
        lon_2d, lat_2d = np.meshgrid(lon, lat)
        # Create new figure
        #fig = plt.figure(figsize = (18,12))
        fig = plt.figure()
        fig.set_size_inches(26.67, 15)
        datacrs = ccrs.PlateCarree()
        plotcrs = ccrs.LambertConformal(central_longitude=-150,
                                        central_latitude=55,
                                        standard_parallels=(30, 60))
        # Add the map and set the extent
        ax = plt.axes(projection=plotcrs)
        ext = ax.set_extent([-195., -115., 50., 72.], datacrs)
        ext2 = ax.set_aspect('auto')
        ax.background_patch.set_fill(False)
        # Add state boundaries to plot
        ax.add_feature(cfeature.STATES, edgecolor='black', linewidth=2)
        # Add geopolitical boundaries for map reference
        ax.add_feature(cfeature.COASTLINE.with_scale('50m'))
        ax.add_feature(cfeature.OCEAN.with_scale('50m'))
        ax.add_feature(cfeature.LAND.with_scale('50m'), facecolor='#cc9666', linewidth=4)
        if level == 30000:
            spdrng_sped = np.arange(30, 190, 2)
            windlvl = 'Jet_Stream'
        elif level == 70000:
            spdrng_sped = np.arange(20, 100, 1)
            windlvl = '9000_Winds_Aloft'
        elif level == 92500:
            spdrng_sped = np.arange(20, 80, 1)
            windlvl = '3000_Winds_Aloft'
        else:
            pass
        top = cm.get_cmap('Greens')
        middle = cm.get_cmap('YlOrRd')
        bottom = cm.get_cmap('BuPu_r')
        newcolors = np.vstack((top(np.linspace(0, 1, 128)),
                               middle(np.linspace(0, 1, 128))))
        newcolors2 = np.vstack((newcolors, bottom(np.linspace(0, 1, 128))))
        cmap = ListedColormap(newcolors2)
        cf = ax.contourf(lon_2d, lat_2d, wspd, spdrng_sped, cmap=cmap,
                         transform=datacrs, extend='max', alpha=0.75)
        cbar = plt.colorbar(cf, orientation='horizontal', pad=0, aspect=50,
                            drawedges='true')
        cbar.ax.tick_params(labelsize=16)
        wslice = slice(1, None, 4)
        ax.quiver(lon_2d[wslice, wslice], lat_2d[wslice, wslice],
                  u_windkt[wslice, wslice], v_windkt[wslice, wslice], width=0.0015,
                  headlength=4, headwidth=3, angles='xy', color='black', transform=datacrs)
        plt.savefig(imoutpath + 'TV_UpperAir' + LevelDict[level] + '_' + timestamp + '.png', bbox_inches='tight')
        # Now we use Pillow to overlay the banner with the appropriate day
        background = Image.open(imoutpath + 'TV_UpperAir' + LevelDict[level] + '_' + timestamp + '.png')
        im = Image.open(impath + 'Banner_' + windlvl + '_' + timestamp + '.png')
        # resize the image
        size = background.size
        im = im.resize(size, Image.ANTIALIAS)
        background.paste(im, (17, 8), im)
        background.save(imoutpath + 'TV_UpperAir' + LevelDict[level] + '_' + timestamp + '.png', 'PNG')
Thanks for the question! My approach here is, for each column, to interpolate the pressure coordinate of the GFS-output geopotential height onto your provided altitudes, estimating the pressure of each height level in that column. I can then use those pressures to interpolate the GFS-output u and v. The GFS-output GPH and winds have very slightly different pressure coordinates, which is why I interpolate twice. I performed the interpolation with MetPy's interpolate.log_interpolate_1d, which does a linear interpolation on the log of the inputs. Here is the code I used!
from datetime import datetime
import numpy as np
import metpy.calc as mpcalc
from metpy.units import units
from metpy.interpolate import log_interpolate_1d
from siphon.catalog import TDSCatalog
gfs_url = 'https://tds.scigw.unidata.ucar.edu/thredds/catalog/grib/NCEP/GFS/Global_0p5deg/catalog.xml'
cat = TDSCatalog(gfs_url)
now = datetime.utcnow()
# A shortcut to NCSS
ncss = cat.datasets['Best GFS Half Degree Forecast Time Series'].subset()
query = ncss.query()
query.var = set()
query.variables('u-component_of_wind_isobaric', 'v-component_of_wind_isobaric', 'Geopotential_height_isobaric')
query.lonlat_box(north=78, south=45, east=-90, west=-220)
query.time(now)
query.accept('netcdf4')
data = ncss.get_data(query)
# Reading in the u(isobaric), v(isobaric), isobaric vars and the GPH(isobaric6) and isobaric6 vars
# These are two slightly different vertical pressure coordinates.
# We will also assign units here, and this can allow us to go ahead and convert to knots
lat = units.Quantity(data.variables['lat'][:].squeeze(), units('degrees'))
lon = units.Quantity(data.variables['lon'][:].squeeze(), units('degrees'))
iso_wind = units.Quantity(data.variables['isobaric'][:].squeeze(), units('Pa'))
iso_gph = units.Quantity(data.variables['isobaric6'][:].squeeze(), units('Pa'))
u = units.Quantity(data.variables['u-component_of_wind_isobaric'][:].squeeze(), units('m/s')).to(units('knots'))
v = units.Quantity(data.variables['v-component_of_wind_isobaric'][:].squeeze(), units('m/s')).to(units('knots'))
gph = units.Quantity(data.variables['Geopotential_height_isobaric'][:].squeeze(), units('gpm'))
# Here we will select our altitudes to interpolate onto and convert them to geopotential meters
altitudes = ([30000., 9000., 3000.] * units('ft')).to(units('gpm'))
# Now we will interpolate the pressure coordinate for model output geopotential height to
# estimate the pressure level for our given altitudes at each grid point
pressures_of_alts = np.zeros((len(altitudes), len(lat), len(lon)))
for ilat in range(len(lat)):
    for ilon in range(len(lon)):
        pressures_of_alts[:, ilat, ilon] = log_interpolate_1d(altitudes,
                                                              gph[:, ilat, ilon],
                                                              iso_gph)
pressures_of_alts = pressures_of_alts * units('Pa')
# Similarly, we will use our interpolated pressures to interpolate
# our u and v winds across their given pressure coordinates.
# This will provide u, v at each of our interpolated pressure
# levels corresponding to our provided initial altitudes
u_at_levs = np.zeros((len(altitudes), len(lat), len(lon)))
v_at_levs = np.zeros((len(altitudes), len(lat), len(lon)))
for ilat in range(len(lat)):
    for ilon in range(len(lon)):
        u_at_levs[:, ilat, ilon], v_at_levs[:, ilat, ilon] = log_interpolate_1d(pressures_of_alts[:, ilat, ilon],
                                                                                iso_wind,
                                                                                u[:, ilat, ilon],
                                                                                v[:, ilat, ilon])
u_at_levs = u_at_levs * units('knots')
v_at_levs = v_at_levs * units('knots')
# We can use mpcalc to calculate a wind speed array from these
wspd = mpcalc.wind_speed(u_at_levs, v_at_levs)
I was able to take my output from this and coerce it into your plotting code (with some unit stripping).
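For reference, a small hedged aside (not from the original answer): pint-backed arrays expose .magnitude, which is all the unit stripping amounts to.
# Strip pint units before handing arrays to matplotlib (names from above)
wspd_plain = wspd.magnitude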
(Images from the original answer: your 300-hPa GFS winds, my "30000-ft" GFS winds, and the interpolated pressure fields at each estimated height level.)
Hope this helps!
I am not sure if this is what you are looking for (I am very new to MetPy), but I have been using MetPy's height_to_pressure_std(altitude) function. It returns a value in hPa, which I then convert to pascals and then to a unitless value to use in Siphon's vertical_level(float) function.
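A minimal sketch of that conversion (variable names are illustrative, not from the answer):
import metpy.calc as mpcalc
from metpy.units import units

# Standard-atmosphere pressure at 9000 ft, converted to Pa, then stripped
# to a bare float for Siphon's vertical_level() query
p = mpcalc.height_to_pressure_std(9000 * units.feet)
level = p.to(units.Pa).magnitude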
I don't think you can use MetPy functions to convert height to pressure (or vice versa) in the upper atmosphere. The errors are too large when using the Standard Atmosphere to convert, say, pressure to feet.

How to speed up iteration through array in ruby

I have multiple csv files that contain the name and the price of products. Products may or may not appear in more than one file. I have to find the highest and the lowest price across these files for each product.
I joined products from both files into one array:
Dir["./*.csv"].each do |file|
CSV.foreach(file, headers:true) do |row|
tmpRow = row.to_s.chomp + "," + file #saving name of the input file
list.push(tmpRow.chomp.split(","))
end
end
The array list looks like this:
[["5893105","2.38", "weightOrSomethingIrrelevant", "./FIAT_2.csv"]]
This is the main algorithm:
while list[0] do
  if list[0] != nil
    tmpPart = list[0][0]
    tmpParts = list.select{ |part, price| part == tmpPart }
    tmpParts.each do |tp|
      tmpPrices.push(tp[1])
    end
    list[0][2].to_f != 0.0 ? tmpWeight = list[0][2].to_s : tmpWeight = "Undefined"
    tmpMaxPrice = tmpParts.select{ |part, price| part == tmpPart && price == tmpPrices.max }
    tmpMinPrice = tmpParts.select{ |part, price| part == tmpPart && price == tmpPrices.min }
    result.push([tmpPart, tmpWeight, tmpPrices.max, tmpMaxPrice[0].last, tmpPrices.min, tmpMinPrice[0].last])
    tmpPart = ""
    list = list - tmpParts
    tmpParts = []
    tmpPrices = []
    tmpMaxPrice = []
    tmpMinPrice = []
    tmpWeight = ""
  end
end
The input files are huge (over 200 000 rows), so I am having problems with efficiency of my algorithm (as it processes one row in half a second).
I am wondering if there is any better way to write this app.
I would split this into several parts:
1) Have a table which represents the files (file name, location, line number, etc.) and, connected to it, a products table holding the row data from each file.
2) A script/function to ingest the files and store their rows as DB records.
3) A script/function to analyse the rows and find products by name, using the DB and pulling the price info out with MIN/MAX (see the sketch below).
This could later be improved to deal with naming inconsistencies, products vs. product occurrences, etc.
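A minimal sketch of steps 2 and 3, assuming the sqlite3 gem; the database file, table, and column names are my illustrative choices, not from the question:
require 'csv'
require 'sqlite3'

db = SQLite3::Database.new('products.db')
db.execute <<~SQL
  CREATE TABLE IF NOT EXISTS products (
    name   TEXT,
    price  REAL,
    source TEXT
  )
SQL

# Step 2: ingest every CSV row as a DB record
Dir['./*.csv'].each do |file|
  CSV.foreach(file, headers: true) do |row|
    db.execute('INSERT INTO products (name, price, source) VALUES (?, ?, ?)',
               [row[0], row[1].to_f, file])
  end
end

# Step 3: let the database compute the extremes per product in one pass
results = db.execute('SELECT name, MIN(price), MAX(price) FROM products GROUP BY name')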

Scala read only certain parts of file

I'm trying to read an input file in Scala whose structure I know; however, I only need every 9th entry. So far I have managed to read the whole thing using:
val lines = sc.textFile("hdfs://moonshot-ha-nameservice/" + args(0))
val fields = lines.map(line => line.split(","))
The issue is that this leaves me with a huge array (we're talking 20 GB of data). Not only have I found myself forced to write some very ugly code to convert between RDD[Array[String]] and Array[String], but it has essentially made my code useless.
I've tried different approaches and mixes between using
.map()
.flatMap() and
.reduceByKey()
however nothing actually put my collected "cells" into the format that I need them to be.
Here's what is supposed to happen: Reading a folder of text files from our server, the code should read each "line" of text in the format:
*---------*
| NASDAQ: |
*---------*
exchange, stock_symbol, date, stock_price_open, stock_price_high, stock_price_low, stock_price_close, stock_volume, stock_price_adj_close
and only keep hold of the stock_symbol, as that is the identifier I'm counting. So far my attempts have been to turn the entire thing into an array and collect only every 9th index from it into a collected_cells var. The issue is that, based on my calculations and real-life results, that code would take 335 days to run (no joke).
Here's my current code for reference:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object SparkNum {
  def main(args: Array[String]) {
    // Do some Scala voodoo
    val sc = new SparkContext(new SparkConf().setAppName("Spark Numerical"))
    // Set input file as per HDFS structure + input args
    val lines = sc.textFile("hdfs://moonshot-ha-nameservice/" + args(0))
    val fields = lines.map(line => line.split(","))
    var collected_cells: Array[String] = new Array[String](0)
    //println("[MESSAGE] Length of CC: " + collected_cells.length)
    val divider: Long = 9
    val array_length = fields.count / divider
    val casted_length = array_length.toInt
    val indexedFields = fields.zipWithIndex
    val indexKey = indexedFields.map{ case (k, v) => (v, k) }
    println("[MESSAGE] Number of lines: " + array_length)
    println("[MESSAGE] Casted length of: " + casted_length)
    for (i <- 1 to casted_length) {
      println("[URGENT DEBUG] Processing line " + i + " of " + casted_length)
      var index = 9 * i - 8
      println("[URGENT DEBUG] Index defined to be " + index)
      collected_cells :+ indexKey.lookup(index)
    }
    println("[MESSAGE] collected_cells size: " + collected_cells.length)
    val single_cells = collected_cells.flatMap(collected_cells => collected_cells);
    val counted_cells = single_cells.map(cell => (cell, 1).reduceByKey{case (x, y) => x + y})
    // val result = counted_cells.reduceByKey((a,b) => (a+b))
    // val inmem = counted_cells.persist()
    //
    // // Collect driver into file to be put into user archive
    // inmem.saveAsTextFile("path to server location")
    // ==> Not necessary to save the result as processing time is recorded, not output
  }
}
The bottom part is currently commented out from my debugging attempts, but it acts as pseudocode for what I need done. I should point out that I am barely familiar with Scala, so things like the _ notation confuse the life out of me.
Thanks for your time.
There are some concepts that need clarification in the question:
When we execute this code:
val lines = sc.textFile("hdfs://moonshot-ha-nameservice/" + args(0))
val fields = lines.map(line => line.split(","))
That does not result in a huge array of the size of the data. That expression represents a transformation of the base data. It can be further transformed until we reduce the data to the information set we desire.
In this case, we want the stock_symbol field of a record encoded as CSV:
exchange, stock_symbol, date, stock_price_open, stock_price_high, stock_price_low, stock_price_close, stock_volume, stock_price_adj_close
I'm also going to assume that the data file contains a banner like this:
*---------*
| NASDAQ: |
*---------*
The first thing we're going to do is to remove anything that looks like this banner. In fact, I'm going to assume that the first field is the name of a stock exchange, which starts with a letter. We will do this before we do any splitting, resulting in:
val lines = sc.textFile("hdfs://moonshot-ha-nameservice/" + args(0))
val validLines = lines.filter(line => !line.isEmpty && line.head.isLetter)
val fields = validLines.map(line => line.split(","))
It helps to write out the types of the variables, for the peace of mind of knowing we have the data types we expect. As our Scala skills progress, that might become less important. Let's rewrite the expression above with types:
val lines: RDD[String] = sc.textFile("hdfs://moonshot-ha-nameservice/" + args(0))
val validLines: RDD[String] = lines.filter(line => !line.isEmpty && line.head.isLetter)
val fields: RDD[Array[String]] = validLines.map(line => line.split(","))
We are interested in the stock_symbol field, which positionally is the element #1 in a 0-based array:
val stockSymbols:RDD[String] = fields.map(record => record(1))
If we want to count the symbols, all that's left is to issue a count:
val totalSymbolCount = stockSymbols.count()
That's not very helpful because we have one entry for every record. Slightly more interesting questions would be:
How many different stock symbols we have?
val uniqueStockSymbols = stockSymbols.distinct.count()
How many records for each symbol do we have?
val countBySymbol = stockSymbols.map(s => (s,1)).reduceByKey(_+_)
In Spark 2.0, CSV support for DataFrames and Datasets is available out of the box.
Given that our data does not have a header row with the field names (as is usual in large datasets), we will need to provide the column names:
val stockDF = sparkSession.read.csv("/tmp/quotes_clean.csv").toDF("exchange", "symbol", "date", "open", "high", "low", "close", "volume", "adj_close")
We can answer our questions very easily now (the $ column syntax needs import sparkSession.implicits._, and count comes from org.apache.spark.sql.functions):
val uniqueSymbols = stockDF.select("symbol").distinct().count
val recordsPerSymbol = stockDF.groupBy($"symbol").agg(count($"symbol"))

Conversion of MetaTrader4 to NinjaTrader

I am trying to port an indicator originally written for MT4 to NT7.
I have the following calculations in MT4:
dayi = iBarShift(Symbol(), myPeriod, Time[i], false);
Q = (iHigh(Symbol(), myPeriod,dayi+1) - iLow(Symbol(),myPeriod,dayi+1));
L = iLow(NULL,myPeriod,dayi+1);
H = iHigh(NULL,myPeriod,dayi+1);
O = iOpen(NULL,myPeriod,dayi+1);
C = iClose(NULL,myPeriod,dayi+1);
myPeriod is a variable where I place the period in minutes (1440 = 1 day).
What are the equivalent functions in NT7 to iBarShift, iHigh and so on?
Thanks in advance
For NinjaTrader:
iLow = Low or Lows for multi-time frame
iHigh = High or Highs
iOpen = Open or Opens
iClose = Close or Closes
So an example would be
double low = Low[0]; // Gets the low of the bar at index 0, or the last fully formed bar (If CalculateOnBarClose = true)
In order to make sure you are working on the 1440 minute time frame, you will need to add the following in the Initialize() method:
Add(PeriodType.Minute, 1440);
If there are no Add statements prior to this one, it will place it at index 1 (0 being the chart's default index) in a 2-dimensional array. So to access the low of the 1440-minute bar at index 0 would be:
double low = Lows[1][0];
For iBarShift, look at
int barIndex = Bars.GetBar(time);
which will give you the index of the bar with the matching time. If you need to use this function on the 1440-minute bars (or others), use the BarsArray property to access the correct Bars object and then call GetBar on it. For example:
int barIndex = BarsArray[1].GetBar(time);
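Putting it together, a hedged sketch of the MT4 lines for the previous completed 1440-minute bar (built from the mappings above; treat it as illustrative rather than tested NT7 code):
// In Initialize():
Add(PeriodType.Minute, 1440);

// In OnBarUpdate(): series index 1 = the 1440-minute bars,
// bars-ago index 1 = the previous completed bar (like dayi+1 in MT4)
double H = Highs[1][1];
double L = Lows[1][1];
double O = Opens[1][1];
double C = Closes[1][1];
double Q = H - L;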
Hope that helps.

Getting current position of one of the multiple objects in a figure?

I wrote a script that returns several text boxes in a figure. The text boxes are movable (I can drag and drop them), and their positions are predetermined by the data in an input matrix (applied to the boxes' positions via a nested for loop). I want to create a matrix which is initially a copy of the input matrix, but is UPDATED as I change the positions of the boxes by dragging them around. How would I update their positions? Here's the entire script:
function drag_drop = drag_drop(tsinput, infoinput)
    [x, ~] = size(tsinput);
    dragging = [];
    orPos = [];
    fig = figure('Name', 'Docker Tool', 'WindowButtonUpFcn', @dropObject,...
        'units', 'centimeters', 'WindowButtonMotionFcn', @moveObject,...
        'OuterPosition', [0 0 25 30]);
    % Setting variables to zero for the loop
    plat_qty = 0;
    time_qty = 0;
    k = 0;
    a = 0;
    % Start loop
    for z = 1:2
        for idx = 1:x
            if tsinput(idx, 4) == 1
                color = 'red';
            else
                color = 'blue';
            end
            a = tsinput(idx, z);
            b = a/100;
            c = floor(b); % hours
            d = c*100;
            e = a - d; % minutes
            time = c*60 + e; % time quantity to be used in 'position'
            time_qty = time/15;
            plat_qty = tsinput(idx, 3)*2;
            box = annotation('textbox', 'units', 'centimeters', 'position',...
                [time_qty plat_qty 1.5 1.5], 'String', infoinput(idx, z),...
                'ButtonDownFcn', @dragObject, 'BackgroundColor', color);
            % need to new=get(box,'Position'), fill out matrix OUT of loop
        end
        fillmenu = uicontextmenu;
        hcb1 = 'set(gco, ''BackgroundColor'', ''red'')';
        hcb2 = 'set(gco, ''BackgroundColor'', ''blue'')';
        item1 = uimenu(fillmenu, 'Label', 'Train Full', 'Callback', hcb1);
        item2 = uimenu(fillmenu, 'Label', 'Train Empty', 'Callback', hcb2);
        hbox = findall(fig, 'Type', 'hggroup');
        for jdx = 1:x
            set(hbox(jdx), 'uicontextmenu', fillmenu);
        end
    end
    new_arr = tsinput;
    function dragObject(hObject, eventdata)
        dragging = hObject;
        orPos = get(gcf, 'CurrentPoint');
    end
    function dropObject(hObject, eventdata, box)
        if ~isempty(dragging)
            newPos = get(gcf, 'CurrentPoint');
            posDiff = newPos - orPos;
            set(dragging, 'Position', get(dragging, 'Position') + ...
                [posDiff(1:2) 0 0]);
            dragging = [];
        end
    end
    function moveObject(hObject, eventdata)
        if ~isempty(dragging)
            newPos = get(gcf, 'CurrentPoint');
            posDiff = newPos - orPos;
            orPos = newPos;
            set(dragging, 'Position', get(dragging, 'Position') + [posDiff(1:2) 0 0]);
        end
    end
end
% Testing purpose input matrices:
% tsinput=[0345 0405 1 1 ; 0230 0300 2 0; 0540 0635 3 1; 0745 0800 4 1]
% infoinput={'AJ35 NOT' 'KL21 MAN' 'XPRES'; 'ZW31 MAN' 'KM37 NEW' 'VISTA';
% 'BC38 BIR' 'QU54 LON' 'XPRES'; 'XZ89 LEC' 'DE34 MSF' 'DERP'}
If I understand you correctly (and please post some code if I'm not), then all you need is indeed a set/get combination.
If boxHandle is a handle to the text-box object, then you get its current position by:
pos = get (boxHandle, 'position')
where pos is the output array of [x, y, width, height].
In order to set to a new position, you use:
set (boxHandle, 'position', newPos)
where newPos is the array of desired position (with the same structure as pos).
EDIT
Regarding updating your matrix: since you have the handle of the object you move, you actually DO have access to the specific text box.
When you create each text box, set its 'UserData' property to the indices of tsinput used for that box. In your nested for loop, add
set(box, 'UserData', [idx, z]);
after the box is created, and in your moveObject callback get the data by
udata = get(dragging, 'UserData');
Then udata contains the indices of the element you want to update.
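For example, a minimal sketch of moveObject extended to keep the copied matrix in sync. This assumes new_arr is shared via the nested-function scope, and the HHMM recovery simply inverts the position arithmetic from the plotting loop; the helper math is illustrative, not from the original answer:
function moveObject(hObject, eventdata)
    if ~isempty(dragging)
        newPos = get(gcf, 'CurrentPoint');
        posDiff = newPos - orPos;
        orPos = newPos;
        set(dragging, 'Position', get(dragging, 'Position') + [posDiff(1:2) 0 0]);
        % Recover which tsinput element this box represents
        udata = get(dragging, 'UserData');
        pos = get(dragging, 'Position');
        % Invert time_qty = time/15: x-position -> minutes -> HHMM
        time = pos(1) * 15;
        hrs = floor(time / 60);
        mins = time - hrs * 60;
        new_arr(udata(1), udata(2)) = hrs * 100 + mins;
    end
end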
