Opening the Indicator in Multiple Charts

I found an interesting MT4 indicator which shows the remaining time until the next bar on the chart. But it only works on the particular pair I attach it to.
I want the program to apply to all open pairs by attaching it to just one of them.
Please check the code below:
//--- input parameters
input string LabelFont = "Arial";
input int LabelSize = 15;
input color LabelColor = clrRed;
input int LabelDistance = 15;
const string LabelName = "TimeToNextCandle";
//+------------------------------------------------------------------+
//| Custom indicator initialization function                         |
//+------------------------------------------------------------------+
int OnInit()
{
//---
   EventSetTimer(1);
   return(INIT_SUCCEEDED);
}
//+------------------------------------------------------------------+
//| Custom indicator deinitialization function                       |
//+------------------------------------------------------------------+
void OnDeinit(const int reason)
{
//---
   EventKillTimer();
   ObjectDelete(0, LabelName);
}
//+------------------------------------------------------------------+
//| Custom indicator iteration function                              |
//+------------------------------------------------------------------+
int OnCalculate(const int rates_total,
                const int prev_calculated,
                const datetime &time[],
                const double &open[],
                const double &high[],
                const double &low[],
                const double &close[],
                const long &tick_volume[],
                const long &volume[],
                const int &spread[])
{
//---
   CalcTime();
//--- return value of prev_calculated for next call
   return(rates_total);
}
//+------------------------------------------------------------------+
//| Timer function                                                   |
//+------------------------------------------------------------------+
void OnTimer()
{
//---
   CalcTime();
}
//+------------------------------------------------------------------+
void CalcTime(void)
{
   // check whether the output label exists; create it if necessary
   if (ObjectFind(LabelName) == -1)
   {
      ObjectCreate(0, LabelName, OBJ_LABEL, 0, 0, 0);
      ObjectSetString(0, LabelName, OBJPROP_FONT, LabelFont);
      ObjectSetInteger(0, LabelName, OBJPROP_FONTSIZE, LabelSize); // was LabelDistance: a copy-paste bug
      ObjectSetInteger(0, LabelName, OBJPROP_COLOR, LabelColor);
      ObjectSetInteger(0, LabelName, OBJPROP_ANCHOR, ANCHOR_RIGHT_LOWER);
      ObjectSetInteger(0, LabelName, OBJPROP_CORNER, CORNER_RIGHT_LOWER);
      ObjectSetInteger(0, LabelName, OBJPROP_XDISTANCE, LabelDistance);
      ObjectSetInteger(0, LabelName, OBJPROP_YDISTANCE, LabelDistance);
   }
   // calculate the remaining time to the next candle
   datetime TimeTo = PeriodSeconds() - (TimeCurrent() - Time[0]);
   // assemble the output string depending on the current chart period
   string Out = StringFormat("%.2d", TimeSeconds(TimeTo));
   if (TimeTo >= 3600)
   {
      Out = StringFormat("%.2d:%s", TimeMinute(TimeTo), Out);
      if (TimeTo >= 86400)
         Out = StringFormat("%d day(s) %.2d:%s", int(TimeTo / 86400), TimeHour(TimeTo), Out);
      else
         Out = StringFormat("%d:%s", TimeHour(TimeTo), Out);
   }
   else
      Out = StringFormat("%d:%s", TimeMinute(TimeTo), Out);
   ObjectSetString(0, LabelName, OBJPROP_TEXT, StringFormat("%s (%.0f%s)", Out, 100.0 / PeriodSeconds() * TimeTo, "%"));
}
//+------------------------------------------------------------------+
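To make the label-formatting logic easier to follow outside MQL4, here is the same seconds-to-"D day(s) H:MM:SS" breakdown as a small Python sketch (the function name is mine, and this is only an illustration of the branching in CalcTime, not part of the indicator):

```python
def format_time_to_next_bar(remaining):
    """Format a remaining-seconds count the way CalcTime() assembles its label."""
    days, rest = divmod(remaining, 86400)
    hours, rest = divmod(rest, 3600)
    minutes, seconds = divmod(rest, 60)
    if remaining >= 86400:
        return f"{days} day(s) {hours:02d}:{minutes:02d}:{seconds:02d}"
    if remaining >= 3600:
        return f"{hours}:{minutes:02d}:{seconds:02d}"
    return f"{minutes}:{seconds:02d}"

print(format_time_to_next_bar(59))      # 0:59
print(format_time_to_next_bar(3700))    # 1:01:40
print(format_time_to_next_bar(90061))   # 1 day(s) 01:01:01
```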

Let's split the task into two parts.
Part I: How to measure the time.
The MetaTrader Terminal architecture splits code execution into three principal unit types:
- {0|1} unique ExpertAdvisor-type of MQL4 code, per MT4.graph
- {0|1|..|n} CustomIndicator-type of MQL4 code, per MT4.graph
- {0|1} unique Script-type of MQL4 code, per MT4.graph
This may make it sound attractive to use more and more CustomIndicators, but there is a catch.
Catch XXII.
All CustomIndicators and all ExpertAdvisors are executed in response to a Market Event message arrival. Whenever a Market Event arrives, every ExpertAdvisor and every CustomIndicator is pushed to run its respective processing ( the OnTick(){...} function is called in ExpertAdvisor(s), the OnCalculate(){...} function is called in CustomIndicator(s) ). That still sounds reasonable. Catch XXII. is hidden in the fact that all CustomIndicators share a SINGLE THREAD. Yes, ALL!
Translated into plain English: keep anything that could block out of CustomIndicator MQL4 code and put it anywhere else but there.
Using the New-MQL4.56789 EventSetTimer() facilities does not improve this already obvious problem, quite the opposite, so let's forget about OnTimer().
So, how can such a feature be designed safely?
One possible way is a low-profile service design, deployed in a helper ( non-trading ) MT4.graph, best implemented as a Script ( it is completely under one's control, asynchronous to any Event-triggered EA / CI code execution, and it keeps working even while the Market is closed or the connection to the MetaTrader Server is down ).
This "central" clock ( timing facility ) provides a unique reference time and stores it so as to be available to any "consumer" in the other MT4.graphs.
// MQL4-Script
double assembleGV( int aTimeNOW,
                   int aPERIOD
                   ){
   #define BrokerServerGMToffset 0 // Yes, Real World isn't perfect :o)
   int aTimeToEoB = PeriodSeconds( aPERIOD ) - ( aTimeNOW - ( iTime( _Symbol, aPERIOD, 0 ) - BrokerServerGMToffset ) );
   return( aTimeToEoB + aTimeToEoB / (double)PeriodSeconds( aPERIOD ) ); // INT + FRAC ( non-%, but holding the value of the remainder; the (double) cast avoids integer division )
}
while ( true ){
   Sleep( 250 );                  // -------- COOL DOWN CPU-CORE(s)
   int TimeNow = (int)TimeGMT();
   GlobalVariableSet( "msMOD_TimeSERVICE_4_PERIOD=60",      assembleGV( TimeNow, PERIOD_M1  ) ); // for PERIOD_M1
   GlobalVariableSet( "msMOD_TimeSERVICE_4_PERIOD=300",     assembleGV( TimeNow, PERIOD_M5  ) ); // for PERIOD_M5
   GlobalVariableSet( "msMOD_TimeSERVICE_4_PERIOD=900",     assembleGV( TimeNow, PERIOD_M15 ) ); // for PERIOD_M15
   GlobalVariableSet( "msMOD_TimeSERVICE_4_PERIOD=1800",    assembleGV( TimeNow, PERIOD_M30 ) ); // for PERIOD_M30
   GlobalVariableSet( "msMOD_TimeSERVICE_4_PERIOD=3600",    assembleGV( TimeNow, PERIOD_H1  ) ); // for PERIOD_H1
   GlobalVariableSet( "msMOD_TimeSERVICE_4_PERIOD=14400",   assembleGV( TimeNow, PERIOD_H4  ) ); // for PERIOD_H4
   GlobalVariableSet( "msMOD_TimeSERVICE_4_PERIOD=86400",   assembleGV( TimeNow, PERIOD_D1  ) ); // for PERIOD_D1
   GlobalVariableSet( "msMOD_TimeSERVICE_4_PERIOD=604800",  assembleGV( TimeNow, PERIOD_W1  ) ); // for PERIOD_W1
   GlobalVariableSet( "msMOD_TimeSERVICE_4_PERIOD=2592000", assembleGV( TimeNow, PERIOD_MN1 ) ); // for PERIOD_MN1
   GlobalVariableSet( "msMOD_TimeSERVICE_4_PERIOD=120",     assembleGV( TimeNow, PERIOD_M2  ) ); // for PERIOD_M2
   GlobalVariableSet( "msMOD_TimeSERVICE_4_PERIOD=180",     assembleGV( TimeNow, PERIOD_M3  ) ); // for PERIOD_M3
   GlobalVariableSet( "msMOD_TimeSERVICE_4_PERIOD=240",     assembleGV( TimeNow, PERIOD_M4  ) ); // for PERIOD_M4
   GlobalVariableSet( "msMOD_TimeSERVICE_4_PERIOD=360",     assembleGV( TimeNow, PERIOD_M6  ) ); // for PERIOD_M6
   GlobalVariableSet( "msMOD_TimeSERVICE_4_PERIOD=600",     assembleGV( TimeNow, PERIOD_M10 ) ); // for PERIOD_M10
   GlobalVariableSet( "msMOD_TimeSERVICE_4_PERIOD=720",     assembleGV( TimeNow, PERIOD_M12 ) ); // for PERIOD_M12
   GlobalVariableSet( "msMOD_TimeSERVICE_4_PERIOD=1200",    assembleGV( TimeNow, PERIOD_M20 ) ); // for PERIOD_M20
   GlobalVariableSet( "msMOD_TimeSERVICE_4_PERIOD=7200",    assembleGV( TimeNow, PERIOD_H2  ) ); // for PERIOD_H2
   GlobalVariableSet( "msMOD_TimeSERVICE_4_PERIOD=10800",   assembleGV( TimeNow, PERIOD_H3  ) ); // for PERIOD_H3
   GlobalVariableSet( "msMOD_TimeSERVICE_4_PERIOD=21600",   assembleGV( TimeNow, PERIOD_H6  ) ); // for PERIOD_H6
   GlobalVariableSet( "msMOD_TimeSERVICE_4_PERIOD=28800",   assembleGV( TimeNow, PERIOD_H8  ) ); // for PERIOD_H8
   GlobalVariableSet( "msMOD_TimeSERVICE_4_PERIOD=43200",   assembleGV( TimeNow, PERIOD_H12 ) ); // for PERIOD_H12
}
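The GlobalVariable trick packs two numbers into one double: the integer part carries the seconds left until the End-of-Bar, while the fractional part carries that same value as a fraction of the bar period. A small Python sketch of the encode/decode round-trip (function names are mine, purely illustrative):

```python
def pack_time_to_eob(seconds_left, period_seconds):
    """INT part = seconds till End-of-Bar, FRAC part = remaining fraction of the bar."""
    # note: seconds_left == period_seconds would spill the fraction into the integer part
    return seconds_left + seconds_left / period_seconds

def unpack(value):
    """Recover what the 'consumer' side reads back from the GlobalVariable."""
    seconds_left = int(value)
    percent_left = (value % 1) * 100.0
    return seconds_left, percent_left

packed = pack_time_to_eob(45, 300)   # 45 s left in an M5 ( 300 s ) bar
secs, pct = unpack(packed)
print(secs, round(pct))              # 45 15
```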
Part II: How to display the time in all MT4.graphs in the most lightweight form?
Given that the central service takes care of the respective ENUM_TIMEFRAMES listed ( whether or not they are being "visited" by other MT4.graphs ) and posts the pre-fabbed results of such "time-service" calculus, any potential "consumer" may just asynchronously check and react to the posted "time-service" values, in places where that is feasible and safe in its respective flow of code execution. Kind users remain in full control of where such a lightweight GUI call may appear and where updates may benefit from a ( potentially deferred ) enforced GUI repaint:
/* MQL4-ExpertAdvisor
or
other Script
or even
a lightweight add-on to a CustomIndicator */
#define LabelNAME "msMOD_StackOverflowDEMO"
bool aDeferred_GUI_REPAINT = false;
.
..
RepaintTIME();
..
.
if ( aDeferred_GUI_REPAINT ){
     aDeferred_GUI_REPAINT = false;
     WindowRedraw();
}
void RepaintTIME(){
   static string aGlobalVariableNAME = "msMOD_TimeSERVICE_4_PERIOD=" + (string)PeriodSeconds(); // must match the name the Script service writes
   ObjectSetString( 0,
                    LabelNAME,   // external responsibility to set/create
                    OBJPROP_TEXT,
                    StringFormat( "%d (%.0f %%)",
                                  int( GlobalVariableGet( aGlobalVariableNAME ) ),                   // SECONDS TILL EoB
                                  MathMod( GlobalVariableGet( aGlobalVariableNAME ), 1.0 ) * 100.0 ) // PER CENTO [%] EXPRESSED REMAINDER
                    );
   aDeferred_GUI_REPAINT = true;
}


Turf returns wrong (?) bearing and distance

In my react app I have this piece of code:
import * as turfBearing from '@turf/bearing'
import * as turfDistance from '@turf/distance'
distance( p1, p2 ) {
return Math.round( turfDistance.default( p1, p2 ) * 1000 )
}
bearing( p1, p2 ) {
return ( Math.round( turfBearing.default( p1, p2 ) ) + 360 ) % 360
}
Given the data:
const p1 = [ 48.1039072, 11.6558318 ]
const p2 = [ 48.1035817, 11.6555873 ]
The results are:
bearing = 233, dist = 45
If I feed the same data to an online service like https://www.omnicalculator.com/other/azimuth, it gives results which are considerably different from turf's.
Is this turf's problem or the online calculator's?
Indeed, Turfjs seemed to have some problems in its calculations. I swapped it for a randomly picked library, "sgeo": "^0.0.6", and the code:
distance( p1, p2 ) {
return Math.round( new sgeo.latlon( ...p1 ).distanceTo( new sgeo.latlon( ...p2 ) ) * 1000 )
}
bearing( p1, p2 ) {
return Math.round( new sgeo.latlon( ...p1 ).bearingTo( new sgeo.latlon( ...p2 ) ) )
}
produces relevant results:
bearing = 206 dist = 40
Turf expects data in (longitude, latitude) order, per the GeoJSON standard; see https://github.com/Turfjs/turf#data-in-turf
In your case the input data should be:
const p1 = [ 11.6558318, 48.1039072 ]
const p2 = [ 11.6555873, 48.1035817 ]
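A quick way to confirm the argument-order diagnosis is to compute the initial bearing and haversine distance by hand, independent of any library (Python here for brevity): with the coordinates read as (latitude, longitude) the geometry gives the expected south-westerly ~207° / ~40 m, while feeding them swapped reproduces Turf's "wrong" 233°:

```python
import math

def bearing_deg(p1, p2):
    """Initial great-circle bearing from p1 to p2; points are (lat, lon) in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    dlon = lon2 - lon1
    x = math.sin(dlon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360) % 360

def distance_m(p1, p2):
    """Haversine distance in metres; points are (lat, lon) in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

p1 = (48.1039072, 11.6558318)   # (lat, lon)
p2 = (48.1035817, 11.6555873)

print(round(bearing_deg(p1, p2)))              # ~207, south-west, close to sgeo's 206
print(round(distance_m(p1, p2)))               # ~40 m
# Swapping the coordinate order reproduces Turf's reported numbers:
print(round(bearing_deg(p1[::-1], p2[::-1])))  # ~233
```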

Using XmlWriter to create large document from LINQ to SQL / LINQPad throws Out of Memory Exception

I'm trying to export data in a LINQPad script and keep receiving Out of Memory exception. I feel like the script is doing all 'streamable' actions so not sure why I'm getting this.
The main loop of the code looks like the following. A few notes:
1) The first query returns around 60K rows profileDB.Profiles.Where(p => p.Group.gName == groupName).Select( d => d.pAuthID )
2) The second query for each pAuthID returns rows in the database where one field is an Xml blob of data stored in a string field. It is not that big... < 500K for sure. Each pAuthID row could have as many as 50 rows of FolderItems. The query is profileDB.FolderItems.Where(f => f.Profile.pAuthID == p && ( folderTypes[0] == "*" || folderTypes.Contains(f.fiEntryType) ) ).OrderBy(f => f.fiEntryDate)
3) I only write a single line to the result pane when the processing starts.
4) The script runs for a long time, throwing exception when the output file is around 600-700MB. Huge I know, but it is a requirement that we dump out all the data into Xml.
5) The WriteFolderItems function/loop will be pasted below the main loop.
6) I call XmlWriter.Flush after each xDataDef element.
using (var xw = XmlWriter.Create(fileName, new XmlWriterSettings { Indent = false } ) )
{
xw.WriteStartElement( "xDataDefs" );
foreach( var p in profileDB.Profiles.Where(p => p.Group.gName == groupName).Select( d => d.pAuthID ) )
{
if ( totalRows == 0 ) // first one...
{
string.Format( "Writing results to {0}...", fileName ).Dump( "Progress" );
}
totalRows++;
var folderItems = profileDB.FolderItems.Where(f => f.Profile.pAuthID == p && ( folderTypes[0] == "*" || folderTypes.Contains(f.fiEntryType) ) ).OrderBy(f => f.fiEntryDate);
if ( folderItems.Any() )
{
xw.WriteStartElement("xDataDef");
xw.WriteAttributeString("id-auth", p);
xw.WriteStartElement("FolderItems");
WriteFolderItems(profileDB, datalockerConnectionString, xw, folderItems, documentsDirectory, calcDocumentFolder, exportFileData);
xw.WriteEndElement();
xw.WriteEndElement();
xw.Flush();
}
}
xw.WriteEndElement();
}
WriteFolderItems has looping code as well that looks like the following. A few notes:
1) I'd expect the foreach( var f in folderItems ) to be streaming
2) For some of the FolderItem rows that are Xml blobs of cached documents, I need to run ~ 1-5 queries against the database to get some additional information to stick into the Xml export: var docInfo = profileDB.Documents.Where( d => d.docfiKey == f.fiKey && d.docFilename == fileName ).FirstOrDefault();
3) I call XmlWriter.Flush after each FolderItem row.
public void WriteFolderItems( BTR.Evolution.Data.DataContexts.Legacy.xDS.DataContext profileDB, string datalockerConnectionString, XmlWriter xw, IEnumerable<BTR.Evolution.Data.DataContexts.Legacy.xDS.FolderItem> folderItems, string documentsOutputDirectory, string calcDocumentFolder, bool exportFileData )
{
foreach( var f in folderItems )
{
// The Xml blob string
var calculation = XElement.Parse( f.fiItem );
// If it contains 'cached-document' elements, need to download the actual document from DataLocker database
foreach( var document in calculation.Elements( "Data" ).Elements( "TabDef" ).Elements( "cache-documents" ).Elements( "cached-document" ) )
{
var fileName = (string)document.Attribute( "name" );
// Get author/token to be used during import
var docInfo = profileDB.Documents.Where( d => d.docfiKey == f.fiKey && d.docFilename == fileName ).FirstOrDefault();
if ( docInfo != null )
{
document.Add( new XElement( "author", docInfo.docUploadAuthID ) );
document.Add( new XElement( "token", docInfo.docDataLockerToken ) );
}
// Export associated document from DataLocker connection...XmlWriter is not affected, simply saves document to local hard drive
if ( exportFileData && DataLockerExtensions.ByConnection( datalockerConnectionString ).Exists( calcDocumentFolder, (string)document.Attribute( "name" ), null ) )
{
using ( var fs = new FileStream( Path.Combine( documentsOutputDirectory, fileName.Replace( "/", "__" ) ), FileMode.Create ) )
{
string contentType;
using ( var ds = DataLockerExtensions.ByConnection( datalockerConnectionString ).Get( calcDocumentFolder, (string)document.Attribute( "name" ), null, out contentType ) )
{
ds.CopyTo( fs );
}
}
}
}
// Write the calculation to the XmlWriter
xw.WriteStartElement( "FolderItem" );
xw.WriteElementString( "Key", f.fiKey.ToString() );
xw.WriteElementString( "EntryDate", XmlConvert.ToString( f.fiEntryDate.Value, XmlDateTimeSerializationMode.Local ) );
xw.WriteElementString( "ItemType", f.fiEntryType );
xw.WriteElementString( "Author", f.fiAuthor );
xw.WriteElementString( "Comment", f.fiComment );
xw.WriteStartElement( "Item" );
calculation.WriteTo( xw );
xw.WriteEndElement();
xw.WriteEndElement();
xw.Flush();
}
}
Make sure you disable Change Tracking, or the EF / LINQ to SQL change tracker will retain references to every entity it loads. For LINQ to SQL, set DataContext.ObjectTrackingEnabled = false before running the queries.
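For contrast, here is a minimal sketch of the same write-and-flush pattern (in Python, since the point is language-agnostic): a streaming XML writer holds O(1) state no matter how many elements pass through it, which is why a growing memory footprint has to come from something that accumulates per row, such as an ORM identity map:

```python
import io
from xml.sax.saxutils import XMLGenerator
from xml.sax.xmlreader import AttributesImpl

out = io.StringIO()                 # stands in for the output file
writer = XMLGenerator(out, encoding="utf-8")
writer.startDocument()
writer.startElement("xDataDefs", AttributesImpl({}))
for i in range(10_000):             # stand-in for the 60K-row profile loop
    writer.startElement("xDataDef", AttributesImpl({"id-auth": str(i)}))
    writer.characters("payload")
    writer.endElement("xDataDef")
    # with a real file underneath, flushing here would bound buffered data,
    # just like XmlWriter.Flush() after each xDataDef element
writer.endElement("xDataDefs")
writer.endDocument()
print(out.getvalue()[:60])
```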

Non standard ZCL package support for CC253x firmware

My problem is:
Livolo switches have their own Zigbee gateway. I want to connect them to zigbee2mqtt with a CC2531 USB dongle. In general it works, but when I turn the switch on/off with its physical button, the switch sends a malformed ZCL packet.
I'm an absolute newbie in microcontroller programming and in the Zigbee architecture, so I hope someone can help me and answer these questions:
Where can I intercept that malformed packet?
How can I fix that packet so it conforms to the Zigbee standard?
I use Z-STACK-HOME 1.2.2a firmware and compile it as described there:
https://github.com/Koenkk/Z-Stack-firmware/blob/master/coordinator/Z-Stack_Home_1.2/COMPILE.md
// Malformed ZCL packet
// header
0x7c, // [0111 1100]
// 01 - frame type = "Command is specific or local to a cluster"
// 1 - manufacturer spec = manufacturer code present
// 1 - direction = "from server to client"
// 1 - disable default response
// 100 - reserved
0xd2, 0x15, // manufacturer
0xd8, // sequence
0x00, // read attrs command
// endpoint address
0x12, 0x0f, 0x05, 0x18, 0x00, 0x4b, 0x12, 0x00,
0x22, 0x00, // ????? need more data from another switches
0x03 // 0x00|0x01|0x02|0x03 - switch state
upd:
I think I can intercept the message in the AF.c file, in the afIncomingData function, and fix it in afBuildMSGIncoming.
So now I hope someone can help me with the right message format, one that a standard ZCL parser can process.
void afIncomingData( aps_FrameFormat_t *aff, zAddrType_t *SrcAddress, uint16 SrcPanId,
NLDE_Signal_t *sig, uint8 nwkSeqNum, uint8 SecurityUse,
uint32 timestamp, uint8 radius )
{
endPointDesc_t *epDesc = NULL;
epList_t *pList = epList;
#if !defined ( APS_NO_GROUPS )
uint8 grpEp = APS_GROUPS_EP_NOT_FOUND;
#endif
if ( ((aff->FrmCtrl & APS_DELIVERYMODE_MASK) == APS_FC_DM_GROUP) )
{
#if !defined ( APS_NO_GROUPS )
// Find the first endpoint for this group
grpEp = aps_FindGroupForEndpoint( aff->GroupID, APS_GROUPS_FIND_FIRST );
if ( grpEp == APS_GROUPS_EP_NOT_FOUND ) {
// No endpoint found, default to endpoint 1.
// In the original source code there is a return here,
// which prevents the messages from being forwarded.
// For our use-case we want to capture all messages,
// even if the coordinator is not in the group.
epDesc = afFindEndPointDesc( 1 );
}
else {
epDesc = afFindEndPointDesc( grpEp );
}
if ( epDesc == NULL )
return; // Endpoint descriptor not found
pList = afFindEndPointDescList( epDesc->endPoint );
#else
return; // Not supported
#endif
}
else if ( aff->DstEndPoint == AF_BROADCAST_ENDPOINT )
{
// Set the list
if ( pList != NULL )
{
epDesc = pList->epDesc;
}
}
else if ( aff->DstEndPoint == 10 || aff->DstEndPoint == 11 ) {
if ( (epDesc = afFindEndPointDesc( 1 )) )
{
pList = afFindEndPointDescList( epDesc->endPoint );
}
}
else if ( (epDesc = afFindEndPointDesc( aff->DstEndPoint )) )
{
pList = afFindEndPointDescList( epDesc->endPoint );
}
while ( epDesc )
{
uint16 epProfileID = 0xFFFE; // Invalid Profile ID
if ( pList->pfnDescCB )
{
uint16 *pID = (uint16 *)(pList->pfnDescCB(
AF_DESCRIPTOR_PROFILE_ID, epDesc->endPoint ));
if ( pID )
{
epProfileID = *pID;
osal_mem_free( pID );
}
}
else if ( epDesc->simpleDesc )
{
epProfileID = epDesc->simpleDesc->AppProfId;
}
// First part of verification is to make sure that:
// the local Endpoint ProfileID matches the received ProfileID OR
// the message is specifically send to ZDO (this excludes the broadcast endpoint) OR
// if the Wildcard ProfileID is received the message should not be sent to ZDO endpoint
if ( (aff->ProfileID == epProfileID) ||
((epDesc->endPoint == ZDO_EP) && (aff->ProfileID == ZDO_PROFILE_ID)) ||
((epDesc->endPoint != ZDO_EP) && ( aff->ProfileID == ZDO_WILDCARD_PROFILE_ID )) )
{
// Save original endpoint
uint8 endpoint = aff->DstEndPoint;
// overwrite with descriptor's endpoint
aff->DstEndPoint = epDesc->endPoint;
afBuildMSGIncoming( aff, epDesc, SrcAddress, SrcPanId, sig,
nwkSeqNum, SecurityUse, timestamp, radius );
// Restore with original endpoint
aff->DstEndPoint = endpoint;
}
if ( ((aff->FrmCtrl & APS_DELIVERYMODE_MASK) == APS_FC_DM_GROUP) )
{
#if !defined ( APS_NO_GROUPS )
// Find the next endpoint for this group
grpEp = aps_FindGroupForEndpoint( aff->GroupID, grpEp );
if ( grpEp == APS_GROUPS_EP_NOT_FOUND )
return; // No endpoint found
epDesc = afFindEndPointDesc( grpEp );
if ( epDesc == NULL )
return; // Endpoint descriptor not found
pList = afFindEndPointDescList( epDesc->endPoint );
#else
return;
#endif
}
else if ( aff->DstEndPoint == AF_BROADCAST_ENDPOINT )
{
pList = pList->nextDesc;
if ( pList )
epDesc = pList->epDesc;
else
epDesc = NULL;
}
else
epDesc = NULL;
}
}
static void afBuildMSGIncoming( aps_FrameFormat_t *aff, endPointDesc_t *epDesc,
zAddrType_t *SrcAddress, uint16 SrcPanId, NLDE_Signal_t *sig,
uint8 nwkSeqNum, uint8 SecurityUse, uint32 timestamp, uint8 radius )
{
afIncomingMSGPacket_t *MSGpkt;
const uint8 len = sizeof( afIncomingMSGPacket_t ) + aff->asduLength;
uint8 *asdu = aff->asdu;
MSGpkt = (afIncomingMSGPacket_t *)osal_msg_allocate( len );
if ( MSGpkt == NULL )
{
return;
}
MSGpkt->hdr.event = AF_INCOMING_MSG_CMD;
MSGpkt->groupId = aff->GroupID;
MSGpkt->clusterId = aff->ClusterID;
afCopyAddress( &MSGpkt->srcAddr, SrcAddress );
MSGpkt->srcAddr.endPoint = aff->SrcEndPoint;
MSGpkt->endPoint = epDesc->endPoint;
MSGpkt->wasBroadcast = aff->wasBroadcast;
MSGpkt->LinkQuality = sig->LinkQuality;
MSGpkt->correlation = sig->correlation;
MSGpkt->rssi = sig->rssi;
MSGpkt->SecurityUse = SecurityUse;
MSGpkt->timestamp = timestamp;
MSGpkt->nwkSeqNum = nwkSeqNum;
MSGpkt->macSrcAddr = aff->macSrcAddr;
MSGpkt->macDestAddr = aff->macDestAddr;
MSGpkt->srcAddr.panId = SrcPanId;
MSGpkt->cmd.TransSeqNumber = 0;
MSGpkt->cmd.DataLength = aff->asduLength;
MSGpkt->radius = radius;
if ( MSGpkt->cmd.DataLength )
{
MSGpkt->cmd.Data = (uint8 *)(MSGpkt + 1);
osal_memcpy( MSGpkt->cmd.Data, asdu, MSGpkt->cmd.DataLength );
}
else
{
MSGpkt->cmd.Data = NULL;
}
#if defined ( MT_AF_CB_FUNC )
// If ZDO or SAPI have registered for this endpoint, dont intercept it here
if (AFCB_CHECK(CB_ID_AF_DATA_IND, *(epDesc->task_id)))
{
MT_AfIncomingMsg( (void *)MSGpkt );
// Release the memory.
osal_msg_deallocate( (void *)MSGpkt );
}
else
#endif
{
// Send message through task message.
osal_msg_send( *(epDesc->task_id), (uint8 *)MSGpkt );
}
}
I don't think you're decoding the frame control byte correctly. Looking at some code I've written, I interpret it as follows:
0x7c, // [0111 1100]
// 011 - reserved
// 1 - disable default response
// 1 - direction = "from server to client"
// 1 - manufacturer spec = manufacturer code present
// 00 - frame type = "Command acts across entire profile"
This is based on an old ZCL spec (around 2008?) and perhaps the reserved bits have taken on some meaning in a later version of the spec.
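That bit layout is easy to check mechanically. A small decoder (Python, written against the ZCL frame-control layout described above: bits 0-1 frame type, bit 2 manufacturer specific, bit 3 direction, bit 4 disable default response, bits 5-7 reserved) agrees that 0x7C is a manufacturer-specific, server-to-client frame with the default response disabled:

```python
def decode_zcl_frame_control(b):
    """Decode a ZCL frame-control byte, low-order bits first."""
    return {
        "frame_type": b & 0x03,                     # 0 = profile-wide, 1 = cluster-specific
        "manufacturer_specific": bool(b & 0x04),    # manufacturer code follows the header
        "direction_server_to_client": bool(b & 0x08),
        "disable_default_response": bool(b & 0x10),
        "reserved": (b >> 5) & 0x07,                # 0b011 in the Livolo frame
    }

fc = decode_zcl_frame_control(0x7C)
print(fc)
```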
I believe the manufacturer specific bit indicates that this is not a standard Zigbee command (Read Attributes). If it was Read Attributes, I think it should have an even number of bytes following it (16-bit attribute IDs).
What were the source and destination endpoints, profile ID and cluster ID for that frame?
Update:
It looks like you could modify afIncomingData() to look at the fields of aff to identify this exact message type (frame control, endpoints, cluster, profile), and then hand it off to your own function for processing (and responding if necessary).
But I'd also take a look at documentation for MT_AF_CB_FUNC and MT_AfIncomingMsg() to see if that's the "official" way to identify frames that you want to handle in your own code.
In the AF.c file, I added blocks to the functions afIncomingData and afBuildMSGIncoming, marked by
#if defined ( LIVOLO_SUPPORT )
#endif
In afIncomingData I add:
#if defined ( LIVOLO_SUPPORT )
else if ( aff->DstEndPoint == 0x08 )
{
if ( (epDesc = afFindEndPointDesc( 1 )) )
{
pList = afFindEndPointDescList( epDesc->endPoint );
}
}
#endif
This prevents messages sent to the unknown destination endpoint from being filtered out.
And in afBuildMSGIncoming function:
#if defined ( LIVOLO_SUPPORT )
uint8 fixedPackage[] = {
0x18, 0xd8, 0x01, // header
0x00, 0x00, // attrId
0x00, // success
0x10, // boolean
0x00
};
if (aff->SrcEndPoint == 0x06 && aff->DstEndPoint == 0x01
&& aff->ClusterID == 0x0001 && aff->ProfileID == 0x0104) {
const uint8 mlfrmdHdr[] = { 0x7c, 0xd2, 0x15, 0xd8, 0x00 };
if (osal_memcmp(asdu, mlfrmdHdr, 5) == TRUE) {
fixedPackage[7] = asdu[aff->asduLength - 1];
MSGpkt->cmd.DataLength = 8; // sizeof(fixedPackage)
MSGpkt->clusterId = 0x06; // genOnOff
asdu = fixedPackage;
}
}
#endif
This rewrites the unsupported packet into a standard Read Attributes Response packet.

The third parameter (internalFormat) of qglTexImage2D in Quake2

From the Quake2 source, in the function GL_BeginBuildingLightmaps in gl_rsurf.c, I saw this code:
if ( toupper( gl_monolightmap->string[0] ) == 'A' )
{
gl_lms.internal_format = gl_tex_alpha_format;
}
/*
** try to do hacked colored lighting with a blended texture
*/
else if ( toupper( gl_monolightmap->string[0] ) == 'C' )
{
gl_lms.internal_format = gl_tex_alpha_format;
}
else if ( toupper( gl_monolightmap->string[0] ) == 'I' )
{
gl_lms.internal_format = GL_INTENSITY8;
}
else if ( toupper( gl_monolightmap->string[0] ) == 'L' )
{
gl_lms.internal_format = GL_LUMINANCE8;
}
else
{
gl_lms.internal_format = gl_tex_solid_format;
}
GL_Bind( gl_state.lightmap_textures + 0 );
qglTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
qglTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
qglTexImage2D( GL_TEXTURE_2D,
0,
gl_lms.internal_format,
BLOCK_WIDTH, BLOCK_HEIGHT,
0,
GL_LIGHTMAP_FORMAT,
GL_UNSIGNED_BYTE,
dummy );
qglTexImage2D is the same as glTexImage2D.
The problem is: while debugging I saw that the value of the third parameter ( internalFormat ) of qglTexImage2D is gl_tex_solid_format, which is 3. Is 3 a valid value for the internalFormat parameter?
3 is a perfectly legitimate value for internalFormat.
From the glTexImage2D() documentation:
internalFormat: Specifies the number of color components in the texture. Must be 1, 2, 3, or 4, or one of the following symbolic constants: ...
In legacy OpenGL these small integers are shorthand for the base formats, so 3 means "three color components", i.e. GL_RGB.
Where does the value of the variable gl_tex_solid_format come from? Are you sure you assigned GL_RGBA to it? Maybe you assigned 3 to the variable gl_tex_solid_format.

Tool for discovering library dependencies based on missing symbols

I'm working on a 20-year-old project with some ... interesting problems, among them: there are some shared objects with circular dependencies.
I'm attempting to map out the relationships between all the libraries, and it would be rather helpful if there's an existing tool that can search a list of libraries to see which of them can satisfy the missing dependencies.
For reference, they got around the problem by doing something like the following:
# True list of dependencies:
A: B
B: A
C: A
# Dependencies used in practice:
A:
B: A
C: A B
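The core of the script below is just set logic: collect a binary's undefined symbols, index every library's defined symbols, and report which libraries could satisfy each gap. A sketch of that logic in Python, with the nm invocations replaced by hard-coded symbol tables for the A/B/C example above (all names are illustrative):

```python
# Hypothetical symbol tables; in practice they would come from
# `nm -Aau <file>` (undefined) and `nm -Aa --defined-only <lib>`.
defined = {
    "libA.so": {"a_func"},
    "libB.so": {"b_func"},
    "libC.so": {"c_func"},
}
undefined = {
    "libA.so": {"b_func"},   # A needs a symbol only B defines
    "libB.so": {"a_func"},   # B needs a symbol only A defines -> A <-> B cycle
    "libC.so": {"a_func"},
}

def candidate_providers(lib):
    """Libraries whose defined symbols could satisfy `lib`'s undefined symbols."""
    return {other
            for sym in undefined[lib]
            for other, syms in defined.items()
            if other != lib and sym in syms}

for lib in sorted(undefined):
    print(lib, "depends on:", sorted(candidate_providers(lib)))
```

Running this surfaces exactly the circular A↔B relationship the "true list of dependencies" describes.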
I haven't tested the following code, since I've just attempted to rewrite it from memory, but the version I wrote earlier to solve this (which looked roughly like this) works fine:
#!/usr/bin/env perl
use IPC::Open3;
my $queryFile = $ARGV[0];
shift;
my %missingSymbols = getSymbols( "nm -Aau", $queryFile );
my %symtbl;
foreach $lib ( @ARGV ) {
my %temp = getSymbols( "nm -Aa --defined-only", $lib );
foreach $key ( keys( %temp ) ) {
$symtbl{$key} = (defined($symtbl{$key}) ? "${symtbl{$key}} " : "")
. $temp{$key};
}
}
my %dependencies;
foreach $symbol ( keys( %missingSymbols ) ) {
if( defined( $symtbl{$symbol} ) ) {
foreach $lib ( split( / +/, $symtbl{$symbol} ) ) {
if( !defined( $dependencies{$lib} ) ) {
$dependencies{$lib} = 1;
}
}
}
}
if( scalar( keys( %dependencies ) ) > 0 ) {
print( "$queryFile depends on one or more of the following libs:\n\n" );
print join( "\n", sort( keys( %dependencies ) ) ) . "\n\n";
} else {
print( "Unable to resolve dependencies for $queryFile.\n\n" );
}
# Done.
sub getSymbols {
my $cmd = shift;
my $fname = shift;
checkFileType( $fname );
open3( \*IN, \*OUT, \*ERR, "$cmd $fname" );
my %symhash;
close( IN );
# If you leave ERR open and nm prints to STDERR, reads from
# OUT can block. You could make reads from both handles be
# non-blocking so you could examine STDERR if needed, but I
# don't need to.
close( ERR );
while( <OUT> ) {
chomp;
if( m/^(?:[^:]*:)+[a-zA-Z0-9]*\s*[a-zA-Z] ([^\s]+)$/ ) {
my $temp = defined( $symhash{$1} ) ? "${symhash{$1}} " : "";
$symhash{$1} = $temp . $fname;
}
}
close( OUT );
return %symhash;
}
sub checkFileType {
my $fname = shift;
die "$fname does not exist\n" if( ! -e $fname );
die "Not an ELF or archive file\n" if( `file $fname` !~ m/ELF| ar archive/ );
}
