KLL Compiler
This repository is archived. You can view its files and clone it, but you cannot push to it or open issues and pull requests.

containers.py 15KB

KLL Compiler Re-Write

This was many months of effort in re-designing how the KLL compiler should work. The major problem with the original compiler was how difficult it was to extend language-wise. This led to many delays in KLL 0.4 and 0.5 being implemented.

The new design is a multi-staged compiler, where even tokenization occurs over multiple stages. This allows individual parsing and token regexes to be expressed more simply, without affecting other expressions.

Another area of change is the concept of Contexts. In the original KLL compiler, the idea of a cached assignment was "hacked" on when I realized the language was "broken" (after nearly finishing the compiler). Since assignment order is generally considered not to matter for keymappings, I created a "cached" assignment where the whole file is read into a sub-datastructure, then applied to the master datastructure. Unfortunately, this wasn't really all that clear, so it was annoying to work with. To remedy this, I created KLL Contexts, which contain information about a group of expressions. Not only can these groups be merged with other Contexts, they also carry historical data about how they were generated, allowing errors very late in processing to be pin-pointed back to the offending kll file.

Backends work nearly the same as they did before. However, all call-backs for capability evaluations have been removed. This makes the interface much cleaner, as Contexts can only be symbolically merged now. (Previously, datastructures did evaluation merges where the ScanCode or Capability was looked up right before passing to the backend, but this required additional information from the backend.)

Many of the old parsing and tokenization rules have been reused, along with the hid_dict.py code. The new design takes advantage of processor pools to handle multithreading where it makes sense. For example, all specified files are loaded into RAM simultaneously rather than read from sparingly. The reason for this is so that each Context always has all the information it requires at all times.

kll
- Program entry point (previously kll.py)
- Very small now; does some setting up of command-line args
- Most command-line args are specified by the corresponding processing stage

common/channel.py
- Pixel Channel container classes

common/context.py
- Context container classes
- As is usual with other files, blank classes inherit a base class
- These blank classes are identified by the class name itself to handle special behaviour, and if/when necessary, functions are re-implemented
- MergeContext class facilitates merging of contexts while maintaining lineage

common/expression.py
- Expression container classes
  * Expression base class
  * AssignmentExpression
  * NameAssociationExpression
  * DataAssociationExpression
  * MapExpression
- These classes are used to store expressions after they have finished parsing and tokenization

common/file.py
- Container class for files being read by the KLL compiler

common/emitter.py
- Base class for all KLL emitters
- TextEmitter for dealing with text file templates

common/hid_dict.py
- Slightly modified version of kll_lib/hid_dict.py

common/id.py
- Identification container classes
- Used to identify different types of elements used within the KLL language

common/modifier.py
- Container classes for animation and pixel change functions

common/organization.py
- Data structure merging container classes
- Contains all the sub-datastructure classes as well
- The Organization class handles the merge orchestration and expression insertion

common/parse.py
- Parsing rules for funcparserlib
- Much of this file was taken from the original kll.py
- Many changes to support the multi-stage processing and KLL 0.5

common/position.py
- Container class dealing with physical positions

common/schedule.py
- Container class dealing with scheduling and timing events

common/stage.py
- Contains ControlStage and main Stage classes
  * CompilerConfigurationStage
  * FileImportStage
  * PreprocessorStage
  * OperationClassificationStage
  * OperationSpecificsStage
  * OperationOrganizationStage
  * DataOrganizationStage
  * DataFinalizationStage
  * DataAnalysisStage
  * CodeGenerationStage
  * ReportGenerationStage
- Each of these classes controls the life-cycle of its stage
- If multi-threading is desired, it must be handled within the class
  * The next stage will not start until the current stage is finished
- Errors are handled such that as many errors as possible are recorded before forcing an exit
  * The exit is handled at the end of each stage if necessary
- Command-line arguments for each stage can be defined if necessary (they are given their own grouping)
- Each stage can pull variables and functions from other stages if necessary using a name lookup
  * This means you don't have to worry about over-arching datastructures

emitters/emitters.py
- Container class for KLL emitters
- Handles emitter setup and selection

emitters/kiibohd/kiibohd.py
- kiibohd .h file KLL emitter
- Re-uses some backend code from the original KLL compiler

funcparserlib/parser.py
- Added debug mode control

examples/assignment.kll
examples/defaultMapExample.kll
examples/example.kll
examples/hhkbpro2.kll
examples/leds.kll
examples/mapping.kll
examples/simple1.kll
examples/simple2.kll
examples/simpleExample.kll
examples/state_scheduling.kll
- Updating/adding rules for new compiler and KLL 0.4 + KLL 0.5 support
7 years ago
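The staged-pipeline and Context-lineage ideas described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual kll classes: `Context`, `from_file`, and `run_pipeline` are illustrative stand-ins for the real `common/context.py` and `common/stage.py` machinery.

```python
# Illustrative sketch of two ideas from the re-write:
#  1) Contexts merge symbolically while keeping lineage (which .kll files
#     they came from), so late errors can be traced back to a source file.
#  2) Stages run strictly in order; the next stage only starts once the
#     current one has finished.

class Context:
    """Group of expressions plus a record of the files that produced them."""
    def __init__(self, expressions, lineage):
        self.expressions = list(expressions)
        self.lineage = list(lineage)

    @classmethod
    def from_file(cls, path, expressions):
        # A freshly-read file contributes a single-entry lineage
        return cls(expressions, [path])

    def merge(self, other):
        # Symbolic merge: concatenate expressions and lineage, no evaluation
        return Context(self.expressions + other.expressions,
                       self.lineage + other.lineage)


def run_pipeline(stages, context):
    """Run each stage to completion before starting the next."""
    for stage in stages:
        context = stage(context)
    return context


a = Context.from_file('a.kll', ['S0x04 : U"A";'])
b = Context.from_file('b.kll', ['S0x05 : U"B";'])
merged = a.merge(b)
print(merged.lineage)  # ['a.kll', 'b.kll']

# A trivial two-stage pipeline: normalize whitespace, then pass through
stages = [
    lambda ctx: Context([e.strip() for e in ctx.expressions], ctx.lineage),
    lambda ctx: ctx,
]
final = run_pipeline(stages, merged)
```

Because the merge only concatenates (it never evaluates ScanCodes or Capabilities), any later stage that rejects an expression can still report which file introduced it.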
#!/usr/bin/env python3
# KLL Compiler Containers
#
# Copyright (C) 2014-2016 by Jacob Alexander
#
# This file is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This file is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this file. If not, see <http://www.gnu.org/licenses/>.


### Imports ###

import copy


### Decorators ###

## Print Decorator Variables
ERROR = '\033[5;1;31mERROR\033[0m:'


### Parsing ###

## Containers
class ScanCode:
    # Container for ScanCodes
    #
    # scancode        - Non-interconnect adjusted scan code
    # interconnect_id - Unique id for the interconnect node
    def __init__( self, scancode, interconnect_id ):
        self.scancode = scancode
        self.interconnect_id = interconnect_id

    def __eq__( self, other ):
        return self.dict() == other.dict()

    def __repr__( self ):
        return repr( self.dict() )

    def dict( self ):
        return {
            'ScanCode' : self.scancode,
            'Id'       : self.interconnect_id,
        }

    # Calculate the actual scancode using the offset list
    def offset( self, offsetList ):
        if self.interconnect_id > 0:
            return self.scancode + offsetList[ self.interconnect_id - 1 ]
        else:
            return self.scancode
class ScanCodeStore:
    # Unique lookup for ScanCodes
    def __init__( self ):
        self.scancodes = []

    def __getitem__( self, name ):
        # First check if this is a ScanCode object
        if isinstance( name, ScanCode ):
            # Do a reverse lookup
            for idx, scancode in enumerate( self.scancodes ):
                if scancode == name:
                    return idx

            # Could not find scancode
            return None

        # Return scancode using unique id
        return self.scancodes[ name ]

    # Attempt to add ScanCode to list, return unique id
    def append( self, new_scancode ):
        # Iterate through list to make sure this is a unique ScanCode
        for idx, scancode in enumerate( self.scancodes ):
            if new_scancode == scancode:
                return idx

        # Unique entry, add to the list
        self.scancodes.append( new_scancode )
        return len( self.scancodes ) - 1
class Capabilities:
    # Container for capabilities dictionary and convenience functions
    def __init__( self ):
        self.capabilities = dict()

    def __getitem__( self, name ):
        return self.capabilities[ name ]

    def __setitem__( self, name, contents ):
        self.capabilities[ name ] = contents

    def __repr__( self ):
        return "Capabilities => {0}\nIndexed Capabilities => {1}".format( self.capabilities, sorted( self.capabilities, key = self.capabilities.get ) )

    # Total bytes needed to store arguments
    def totalArgBytes( self, name ):
        totalBytes = 0

        # Iterate over the arguments, summing the total bytes
        for arg in self.capabilities[ name ][ 1 ]:
            totalBytes += int( arg[ 1 ] )

        return totalBytes

    # Name of the capability function
    def funcName( self, name ):
        return self.capabilities[ name ][ 0 ]

    # Only valid while dictionary keys are not added/removed
    def getIndex( self, name ):
        return sorted( self.capabilities, key = self.capabilities.get ).index( name )

    def getName( self, index ):
        return sorted( self.capabilities, key = self.capabilities.get )[ index ]

    def keys( self ):
        return sorted( self.capabilities, key = self.capabilities.get )
class Macros:
    # Container for Trigger Macro : Result Macro correlation
    # Layer selection for generating TriggerLists
    #
    # Only convert USB Code list once all the ResultMacros have been accumulated (does a macro reduction; not reversible)
    # Two staged list for ResultMacros:
    #  1) USB Code/Non-converted (may contain capabilities)
    #  2) Capabilities
    def __init__( self ):
        # Default layer (0)
        self.layer = 0

        # Unique ScanCode Hash Id Lookup
        self.scanCodeStore = ScanCodeStore()

        # Macro Storage
        self.macros = [ dict() ]

        # Base Layout Storage
        self.baseLayout = None
        self.layerLayoutMarkers = []

        # Correlated Macro Data
        self.resultsIndex = dict()
        self.triggersIndex = dict()
        self.resultsIndexSorted = []
        self.triggersIndexSorted = []
        self.triggerList = []
        self.maxScanCode = []
        self.firstScanCode = []
        self.interconnectOffset = []

        # USBCode Assignment Cache
        self.assignmentCache = []

    def __repr__( self ):
        return "{0}".format( self.macros )
    def completeBaseLayout( self ):
        # Copy base layout for later use when creating partial layers and add marker
        self.baseLayout = copy.deepcopy( self.macros[ 0 ] )
        self.layerLayoutMarkers.append( copy.deepcopy( self.baseLayout ) ) # Not used for default layer, just simplifies coding

    def removeUnmarked( self ):
        # Remove all of the unmarked mappings from the partial layer
        for trigger in self.layerLayoutMarkers[ self.layer ].keys():
            del self.macros[ self.layer ][ trigger ]

    def addLayer( self ):
        # Increment layer count, and append another macros dictionary
        self.layer += 1
        self.macros.append( copy.deepcopy( self.baseLayout ) )

        # Add a layout marker for each layer
        self.layerLayoutMarkers.append( copy.deepcopy( self.baseLayout ) )

    # Use for ScanCode trigger macros
    def appendScanCode( self, trigger, result ):
        if not trigger in self.macros[ self.layer ]:
            self.replaceScanCode( trigger, result )
        else:
            self.macros[ self.layer ][ trigger ].append( result )

    # Remove the given trigger/result pair
    def removeScanCode( self, trigger, result ):
        # Remove all instances of the given trigger/result pair
        while result in self.macros[ self.layer ][ trigger ]:
            self.macros[ self.layer ][ trigger ].remove( result )

    # Replaces the given trigger with the given result
    # If multiple results for a given trigger, clear, then add
    def replaceScanCode( self, trigger, result ):
        self.macros[ self.layer ][ trigger ] = [ result ]

        # Mark layer scan code, so it won't be removed later
        # Also check to see if it hasn't already been removed before
        if not self.baseLayout is None and trigger in self.layerLayoutMarkers[ self.layer ]:
            del self.layerLayoutMarkers[ self.layer ][ trigger ]
    # Return a list of ScanCode triggers with the given USB Code trigger
    def lookupUSBCodes( self, usbCode ):
        scanCodeList = []

        # Scan current layer for USB Codes
        for macro in self.macros[ self.layer ].keys():
            if usbCode in self.macros[ self.layer ][ macro ]:
                scanCodeList.append( macro )

        if len( scanCodeList ) == 0:
            if len( usbCode ) > 1 or len( usbCode[0] ) > 1:
                for combo in usbCode:
                    comboCodes = list()
                    for key in combo:
                        scanCode = self.lookupUSBCodes( ( ( key, ), ) )
                        comboCodes.append( scanCode[0][0][0] )
                    scanCodeList.append( tuple( code for code in comboCodes ) )
                scanCodeList = [ tuple( scanCodeList ) ]

        return scanCodeList
    # Check whether we should do soft replacement
    def softReplaceCheck( self, scanCode ):
        # First check if not the default layer
        if self.layer == 0:
            return True

        # Check if current layer is set the same as the BaseMap
        if not self.baseLayout is None and scanCode in self.layerLayoutMarkers[ self.layer ]:
            return False

        # Otherwise, allow replacement
        return True

    # Cache USBCode Assignment
    def cacheAssignment( self, operator, scanCode, result ):
        self.assignmentCache.append( [ operator, scanCode, result ] )

    # Assign cached USBCode Assignments
    def replayCachedAssignments( self ):
        # Iterate over each item in the assignment cache
        for item in self.assignmentCache:
            # Check operator, and choose the specified assignment action
            # Append Case
            if item[0] == ":+":
                self.appendScanCode( item[1], item[2] )
            # Remove Case
            elif item[0] == ":-":
                self.removeScanCode( item[1], item[2] )
            # Replace Case
            elif item[0] == ":" or item[0] == "::":
                self.replaceScanCode( item[1], item[2] )

        # Clear assignment cache
        self.assignmentCache = []
    # Generate/Correlate Layers
    def generate( self ):
        self.generateIndices()
        self.sortIndexLists()
        self.generateOffsetTable()
        self.generateTriggerLists()

    # Generates Index of Results and Triggers
    def generateIndices( self ):
        # Iterate over every trigger result, and add to the resultsIndex and triggersIndex
        for layer in range( 0, len( self.macros ) ):
            for trigger in self.macros[ layer ].keys():
                # Each trigger has a list of results
                for result in self.macros[ layer ][ trigger ]:
                    # Only add, with an index, if result hasn't been added yet
                    if not result in self.resultsIndex:
                        self.resultsIndex[ result ] = len( self.resultsIndex )

                    # Then add a trigger for each result, if trigger hasn't been added yet
                    triggerItem = tuple( [ trigger, self.resultsIndex[ result ] ] )
                    if not triggerItem in self.triggersIndex:
                        self.triggersIndex[ triggerItem ] = len( self.triggersIndex )

    # Sort Index Lists using the indices rather than triggers/results
    def sortIndexLists( self ):
        self.resultsIndexSorted = [ None ] * len( self.resultsIndex )
        # Iterate over the resultsIndex and sort by index
        for result in self.resultsIndex.keys():
            self.resultsIndexSorted[ self.resultsIndex[ result ] ] = result

        self.triggersIndexSorted = [ None ] * len( self.triggersIndex )
        # Iterate over the triggersIndex and sort by index
        for trigger in self.triggersIndex.keys():
            self.triggersIndexSorted[ self.triggersIndex[ trigger ] ] = trigger
    # Generates list of offsets for each of the interconnect ids
    def generateOffsetTable( self ):
        idMaxScanCode = [ 0 ]

        # Iterate over each layer to get list of max scancodes associated with each interconnect id
        for layer in range( 0, len( self.macros ) ):
            # Iterate through each trigger/sequence in the layer
            for sequence in self.macros[ layer ].keys():
                # Iterate over the trigger to locate the ScanCodes
                for combo in sequence:
                    # Iterate over each scancode id in the combo
                    for scancode_id in combo:
                        # Lookup ScanCode
                        scancode_obj = self.scanCodeStore[ scancode_id ]

                        # Extend list if not large enough
                        if scancode_obj.interconnect_id >= len( idMaxScanCode ):
                            idMaxScanCode.extend( [ 0 ] * ( scancode_obj.interconnect_id - len( idMaxScanCode ) + 1 ) )

                        # Determine if this is the max scancode seen for this interconnect id
                        if scancode_obj.scancode > idMaxScanCode[ scancode_obj.interconnect_id ]:
                            idMaxScanCode[ scancode_obj.interconnect_id ] = scancode_obj.scancode

        # Generate interconnect offsets
        self.interconnectOffset = [ idMaxScanCode[0] + 1 ]
        for index in range( 1, len( idMaxScanCode ) ):
            self.interconnectOffset.append( self.interconnectOffset[ index - 1 ] + idMaxScanCode[ index ] )
    # Generates Trigger Lists per layer using index lists
    def generateTriggerLists( self ):
        for layer in range( 0, len( self.macros ) ):
            # Set max scancode to 0xFF (255)
            # But keep track of the actual max scancode and reduce the list size
            self.triggerList.append( [ [] ] * 0xFF )
            self.maxScanCode.append( 0x00 )

            # Iterate through trigger macros to locate necessary ScanCodes and corresponding triggerIndex
            for trigger in self.macros[ layer ].keys():
                for variant in range( 0, len( self.macros[ layer ][ trigger ] ) ):
                    # Identify result index
                    resultIndex = self.resultsIndex[ self.macros[ layer ][ trigger ][ variant ] ]

                    # Identify trigger index
                    triggerIndex = self.triggersIndex[ tuple( [ trigger, resultIndex ] ) ]

                    # Iterate over the trigger to locate the ScanCodes
                    for sequence in trigger:
                        for combo_id in sequence:
                            combo = self.scanCodeStore[ combo_id ].offset( self.interconnectOffset )
                            # Append triggerIndex for each found scanCode of the Trigger List
                            # Do not re-add if triggerIndex is already in the Trigger List
                            if not triggerIndex in self.triggerList[ layer ][ combo ]:
                                # Append is working strangely with list pre-initialization
                                # Doing a 0 check replacement instead -HaaTa
                                if len( self.triggerList[ layer ][ combo ] ) == 0:
                                    self.triggerList[ layer ][ combo ] = [ triggerIndex ]
                                else:
                                    self.triggerList[ layer ][ combo ].append( triggerIndex )

                            # Look for max Scan Code
                            if combo > self.maxScanCode[ layer ]:
                                self.maxScanCode[ layer ] = combo

            # Shrink triggerList to actual max size
            self.triggerList[ layer ] = self.triggerList[ layer ][ : self.maxScanCode[ layer ] + 1 ]

            # Calculate first scan code for layer, useful for uC implementations trying to save RAM
            firstScanCode = 0
            for triggerList in range( 0, len( self.triggerList[ layer ] ) ):
                firstScanCode = triggerList

                # Break if triggerList has items
                if len( self.triggerList[ layer ][ triggerList ] ) > 0:
                    break
            self.firstScanCode.append( firstScanCode )

        # Determine overall maxScanCode
        self.overallMaxScanCode = 0x00
        for maxVal in self.maxScanCode:
            if maxVal > self.overallMaxScanCode:
                self.overallMaxScanCode = maxVal
class Variables:
    # Container for variables
    # Stores three sets of variables: the overall combined set, per layer, and per file
    def __init__( self ):
        # Dictionaries of variables
        self.baseLayout = dict()
        self.fileVariables = dict()
        self.layerVariables = [ dict() ]
        self.overallVariables = dict()
        self.defines = dict()

        self.currentFile = ""
        self.currentLayer = 0
        self.baseLayoutEnabled = True

    def baseLayoutFinished( self ):
        self.baseLayoutEnabled = False

    def setCurrentFile( self, name ):
        # Store using filename and current layer
        self.currentFile = name
        self.fileVariables[ name ] = dict()

        # If still processing BaseLayout
        if self.baseLayoutEnabled:
            if '*LayerFiles' in self.baseLayout.keys():
                self.baseLayout['*LayerFiles'] += [ name ]
            else:
                self.baseLayout['*LayerFiles'] = [ name ]
        # Set for the current layer
        else:
            if '*LayerFiles' in self.layerVariables[ self.currentLayer ].keys():
                self.layerVariables[ self.currentLayer ]['*LayerFiles'] += [ name ]
            else:
                self.layerVariables[ self.currentLayer ]['*LayerFiles'] = [ name ]

    def incrementLayer( self ):
        # Store using layer index
        self.currentLayer += 1
        self.layerVariables.append( dict() )

    def assignVariable( self, key, value ):
        # Overall set of variables
        self.overallVariables[ key ] = value

        # The Name variable is a special accumulation case
        if key == 'Name':
            # BaseLayout still being processed
            if self.baseLayoutEnabled:
                if '*NameStack' in self.baseLayout.keys():
                    self.baseLayout['*NameStack'] += [ value ]
                else:
                    self.baseLayout['*NameStack'] = [ value ]
            # Layers
            else:
                if '*NameStack' in self.layerVariables[ self.currentLayer ].keys():
                    self.layerVariables[ self.currentLayer ]['*NameStack'] += [ value ]
                else:
                    self.layerVariables[ self.currentLayer ]['*NameStack'] = [ value ]

        # If still processing BaseLayout
        if self.baseLayoutEnabled:
            self.baseLayout[ key ] = value
        # Set for the current layer
        else:
            self.layerVariables[ self.currentLayer ][ key ] = value

        # File context variables
        self.fileVariables[ self.currentFile ][ key ] = value
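A short worked example of the `ScanCode` / `ScanCodeStore` behaviour from `containers.py` above: the store deduplicates entries and returns a stable unique id, and `offset()` shifts a node's scancodes past the previous interconnect node's range. Trimmed versions of the two classes are re-declared here so the snippet runs standalone; the scancode values and offset list are made up for illustration.

```python
# Minimal re-declarations of the containers.py classes, trimmed to what the
# example needs (the full versions above also implement __repr__, dict(), and
# reverse lookup via __getitem__).

class ScanCode:
    def __init__(self, scancode, interconnect_id):
        self.scancode = scancode
        self.interconnect_id = interconnect_id

    def __eq__(self, other):
        return (self.scancode, self.interconnect_id) == \
               (other.scancode, other.interconnect_id)

    # Nodes past the first shift their scancodes by the cumulative offset
    def offset(self, offsetList):
        if self.interconnect_id > 0:
            return self.scancode + offsetList[self.interconnect_id - 1]
        return self.scancode


class ScanCodeStore:
    def __init__(self):
        self.scancodes = []

    # Deduplicating append: an existing ScanCode returns its original index
    def append(self, new_scancode):
        for idx, scancode in enumerate(self.scancodes):
            if new_scancode == scancode:
                return idx
        self.scancodes.append(new_scancode)
        return len(self.scancodes) - 1


store = ScanCodeStore()
idx_a = store.append(ScanCode(0x05, 0))    # node 0, code 0x05 -> id 0
idx_b = store.append(ScanCode(0x02, 1))    # node 1, code 0x02 -> id 1
idx_dup = store.append(ScanCode(0x05, 0))  # duplicate -> same id as idx_a

# Suppose node 0's highest scancode is 0x10, so node 1 starts at 0x11:
offsets = [0x11]
print(store.scancodes[idx_b].offset(offsets))  # 0x02 + 0x11 = 0x13
```

This offset scheme is what `generateOffsetTable()` computes for real: each interconnect node's codes are packed directly after the previous node's maximum, so all nodes share one flat scancode space.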