NAMESPACE___for_wordsnets_reordering_qhenomenology.Program___for_wordsnets_reordering_qhenomenology.Program___for_wordsnets_reordering_qhenomenology___Main(ref this.progressBarFOR_ANIMATIONS_SAVING);

FIRST EXPLAIN WHAT THIS CODE DOES. EXPLAIN IT LINE BY LINE.
var allRows = new System.Collections.Generic.List<RowData___for_wordsnets_qhenomenology_reordering>();
var classToRow = new System.Collections.Generic.Dictionary<string, RowData___for_wordsnets_qhenomenology_reordering>(System.StringComparer.OrdinalIgnoreCase);
var tokenFrequencies = new System.Collections.Generic.Dictionary<string, int>(System.StringComparer.OrdinalIgnoreCase);
var alphabetFrequencies = new System.Collections.Generic.Dictionary<char, int>();
var tokenSet = new System.Collections.Generic.HashSet<string>();
var prefixFreq = new System.Collections.Generic.Dictionary<string, int>();
var suffixFreq = new System.Collections.Generic.Dictionary<string, int>();
var coreFreq = new System.Collections.Generic.Dictionary<string, int>();
var coOccurrence = new System.Collections.Generic.Dictionary<string, System.Collections.Generic.HashSet<string>>();
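// ... (elided here in the pasted excerpt: CSV reading, the parsing loop,
// GenerateSubstringFrequencies, and the prefix/suffix/core report writers;
// all of these are explained in the walkthrough below) ...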
using (var adjLog = new System.IO.StreamWriter(adjacencyLogPath))
{
adjLog.WriteLine("SUBSTRING_A,SUBSTRING_B");
foreach (var kv in coOccurrence)
{
foreach (var b in kv.Value)
{
adjLog.WriteLine(kv.Key + "," + b);
}
}
}
System.Windows.Forms.MessageBox.Show("All reports (prefix, suffix, core, adjacency) written successfully.");
}// Main method
}// class
}// namespace
Let's break down this C# code step by step.
The code appears to be part of a Windows Forms application designed to process a large CSV file, likely containing data related to "WordNets" or some form of linguistic or semantic network. The core purpose is to reorder and analyze this data based on a concept referred to as "QHENOMENOLOGY". It also generates several log files containing frequency analyses of tokens, substrings, and co-occurrences.
private void: This specifies that the method is only accessible within the current class and does not return any value.
button_OPEN_WORDNETS_207000_ROWS_17_COLS_TO_REORDER_AS_PER_QHENOMENOLOGYS_Click: This is the name of the method, indicating it's an event handler for a button click, likely one that initiates the "reordering" of a "WordNets" file with 207,000 rows and 17 columns according to "QHENOMENOLOGY".
(object sender, EventArgs e): These are standard parameters for event handlers in Windows Forms. sender refers to the object that raised the event (the button in this case), and e contains event data.
NAMESPACE___for_wordsnets_reordering_qhenomenology.Program___for_wordsnets_reordering_qhenomenology.Program___for_wordsnets_reordering_qhenomenology___Main(...): This is the most crucial line within this method. It calls the static method Program___for_wordsnets_reordering_qhenomenology___Main on the class Program___for_wordsnets_reordering_qhenomenology, which lives inside the namespace NAMESPACE___for_wordsnets_reordering_qhenomenology (namespace, class, and method deliberately share nearly the same long, descriptive name). This method is the entry point for the "qhenomenology reordering" logic.
ref this.progressBarFOR_ANIMATIONS_SAVING: This passes a reference (ref) to a ProgressBar control named progressBarFOR_ANIMATIONS_SAVING (presumably a member of the current form) to the Main method. This allows the Main method to update the progress bar as it processes the file, providing visual feedback to the user.
// ...: This is a commented-out line. It's likely a reminder or a placeholder indicating the expected path or filename of the input CSV file that this process is designed to handle.
This is the closing brace for the button_OPEN_WORDNETS_207000_ROWS_17_COLS_TO_REORDER_AS_PER_QHENOMENOLOGYS_Click method.
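Putting these pieces together, a minimal reconstruction of the handler might look like the following (hedged: the surrounding form code is not shown in the snippet, and the commented-out path line is represented only as a placeholder):
C#
private void button_OPEN_WORDNETS_207000_ROWS_17_COLS_TO_REORDER_AS_PER_QHENOMENOLOGYS_Click(object sender, EventArgs e)
{
    // Delegates all work to the static entry-point method, passing the form's
    // progress bar by reference so it can be updated during processing.
    NAMESPACE___for_wordsnets_reordering_qhenomenology
        .Program___for_wordsnets_reordering_qhenomenology
        .Program___for_wordsnets_reordering_qhenomenology___Main(ref this.progressBarFOR_ANIMATIONS_SAVING);
    // ... (commented-out reminder of the expected input CSV file path)
}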
Section 2: NAMESPACE___for_wordsnets_reordering_qhenomenology Namespace and Classes
This section defines the core logic for processing the WordNet data.
RowData___for_wordsnets_qhenomenology_reordering Class
This class is a data structure (a plain old C# object or POCO) designed to hold parsed information for each row of the input CSV file.
Line-by-Line Explanation:
C#
namespace NAMESPACE___for_wordsnets_reordering_qhenomenology
{
public class RowData___for_wordsnets_qhenomenology_reordering
{
public string OriginalLine;
public string PartsOfSpeech;
public int TOTAL_TOKENS_FOUND_IN_CURRENT_CLASS_ROW;
public string ClassName;
public System.Collections.Generic.HashSet<string> Dependencies;
}//public class RowData___for_wordsnets_qhenomenology_reordering
public class RowData___for_wordsnets_qhenomenology_reordering: Defines a public class to encapsulate data for a single row from the CSV.
public string OriginalLine;: Stores the complete original line read from the CSV file.
public string PartsOfSpeech;: Stores the part-of-speech tag for the entry.
public int TOTAL_TOKENS_FOUND_IN_CURRENT_CLASS_ROW;: Stores the count of tokens (words or alphanumeric sequences) found in the current row.
public string ClassName;: Stores the "class name" for the row, likely a key identifier or the primary word/concept the row represents.
public System.Collections.Generic.HashSet<string> Dependencies;: A HashSet to store unique "dependencies" (other tokens/words) associated with the ClassName in that row. Using a HashSet ensures that each dependency is stored only once and provides efficient lookup.
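For illustration, a row might be populated like this (the field values are invented; only the shape follows the class definition above):
C#
var row = new RowData___for_wordsnets_qhenomenology_reordering
{
    OriginalLine = "42,CROSS,NOUN,...",            // hypothetical raw CSV line
    PartsOfSpeech = "NOUN",
    TOTAL_TOKENS_FOUND_IN_CURRENT_CLASS_ROW = 5,
    ClassName = "CROSS",
    Dependencies = new System.Collections.Generic.HashSet<string>(
        new[] { "SHAPE", "SYMBOL" }, System.StringComparer.OrdinalIgnoreCase)
};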
Program___for_wordsnets_reordering_qhenomenology Class
This is the main class containing the logic for file processing, data extraction, and frequency analysis.
Line-by-Line Explanation (Method by Method):
GetAllSubstrings___length_upto_6 Method
C#
public static System.Collections.Generic.List<string> GetAllSubstrings___length_upto_6(string word)
{
var results = new System.Collections.Generic.List<string>();
word = word.ToUpperInvariant();
for (int len = 1; len <= 6; len++)
{
for (int i = 0; i <= word.Length - len; i++)
{
results.Add(word.Substring(i, len));
}
}
return results;
}// public static List<string> GetAllSubstrings___length_upto_6
public static System.Collections.Generic.List<string> GetAllSubstrings___length_upto_6(string word): This static method takes a word as input and returns a List of all its substrings with lengths from 1 to 6, inclusive.
var results = new System.Collections.Generic.List<string>();: Initializes an empty list to store the substrings.
word = word.ToUpperInvariant();: Converts the input word to uppercase using the invariant culture, so that substring handling is consistent and effectively case-insensitive regardless of the user's locale.
for (int len = 1; len <= 6; len++): Outer loop iterates for substring lengths from 1 up to 6.
for (int i = 0; i <= word.Length - len; i++): Inner loop iterates through all possible starting positions (i) for a substring of len length.
results.Add(word.Substring(i, len));: Extracts the substring of length len starting at index i and adds it to the results list.
return results;: Returns the list of all generated substrings.
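A quick usage sketch (the output order follows the loops above: all length-1 substrings first, then length-2, and so on):
C#
var subs = Program___for_wordsnets_reordering_qhenomenology.GetAllSubstrings___length_upto_6("cross");
// subs: C, R, O, S, S, CR, RO, OS, SS, CRO, ROS, OSS, CROS, ROSS, CROSS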
GenerateSubstringFrequencies Method
C#
private static void GenerateSubstringFrequencies(
System.Collections.Generic.Dictionary<string, int> prefixFreq,
System.Collections.Generic.Dictionary<string, int> suffixFreq,
System.Collections.Generic.Dictionary<string, int> coreFreq,
System.Collections.Generic.Dictionary<string, System.Collections.Generic.HashSet<string>> coOccurrence,
System.Collections.Generic.HashSet<string> tokenSet)
{
foreach (string token in tokenSet)
{
var substrings = GetAllSubstrings___length_upto_6(token);
var unique = new System.Collections.Generic.HashSet<string>(substrings);
foreach (string sub in unique)
{
if (!coreFreq.ContainsKey(sub)) coreFreq[sub] = 0;
coreFreq[sub]++;
}
for (int len = 1; len <= 6 && len <= token.Length; len++)
{
string prefix = token.Substring(0, len);
string suffix = token.Substring(token.Length - len);
if (!prefixFreq.ContainsKey(prefix)) prefixFreq[prefix] = 0;
if (!suffixFreq.ContainsKey(suffix)) suffixFreq[suffix] = 0;
prefixFreq[prefix]++;
suffixFreq[suffix]++;
}
var arr = unique.ToArray();
for (int i = 0; i < arr.Length; i++)
{
for (int j = i + 1; j < arr.Length; j++)
{
string a = arr[i];
string b = arr[j];
if (!coOccurrence.ContainsKey(a)) coOccurrence[a] = new System.Collections.Generic.HashSet<string>();
coOccurrence[a].Add(b);
}
}
}
}// private static void GenerateSubstringFrequencies
private static void GenerateSubstringFrequencies(...): This static method calculates and populates dictionaries for prefix, suffix, and core substring frequencies, plus the co-occurrence map of substrings. The Dictionary and HashSet parameters are reference types, so the method mutates the caller's collections in place (no ref keyword is needed).
foreach (string token in tokenSet): Iterates through each unique token collected from the input CSV file.
var substrings = GetAllSubstrings___length_upto_6(token);: Gets all substrings (length 1-6) for the current token.
var unique = new System.Collections.Generic.HashSet<string>(substrings);: Creates a HashSet of these substrings to ensure only unique substrings are considered for core frequency and co-occurrence, avoiding double-counting if a substring appears multiple times within the same token.
foreach (string sub in unique): Iterates through the unique substrings of the current token.
if (!coreFreq.ContainsKey(sub)) coreFreq[sub] = 0;: If the substring sub is not yet in coreFreq, initializes its count to 0.
coreFreq[sub]++;: Increments the frequency count for the current sub.
for (int len = 1; len <= 6 && len <= token.Length; len++): Loops to generate prefixes and suffixes. It ensures len does not exceed the token's length.
string prefix = token.Substring(0, len);: Extracts the prefix of the current token.
string suffix = token.Substring(token.Length - len);: Extracts the suffix of the current token.
if (!prefixFreq.ContainsKey(prefix)) prefixFreq[prefix] = 0;: Initializes prefix count if not present.
if (!suffixFreq.ContainsKey(suffix)) suffixFreq[suffix] = 0;: Initializes suffix count if not present. The next two lines (prefixFreq[prefix]++; and suffixFreq[suffix]++;) then increment both counts.
var arr = unique.ToArray();: Converts the unique substrings HashSet to an array (via LINQ's ToArray extension, so the file needs a using System.Linq; directive) to easily iterate and compare pairs for co-occurrence.
for (int i = 0; i < arr.Length; i++) { for (int j = i + 1; j < arr.Length; j++) { ... } }: Nested loops to generate all unique pairs of substrings from the unique set (e.g., if unique has {A, B, C}, it will consider (A,B), (A,C), (B,C)).
string a = arr[i]; string b = arr[j];: Gets the two substrings for the current pair.
if (!coOccurrence.ContainsKey(a)) coOccurrence[a] = new System.Collections.Generic.HashSet<string>();: If substring a is not yet a key in coOccurrence, initializes a new HashSet for its co-occurring partners.
coOccurrence[a].Add(b);: Adds substring b to the set of co-occurring partners for a. (Note: the pair is stored in one direction only; b is recorded as a partner of a but not vice versa, so the stored data is a directed relation even though co-occurrence is conceptually symmetric. That is sufficient if both directions are handled when the adjacency graph is built later.)
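If a symmetric relation is needed later, one hedged option is to mirror each pair at insertion time (a small change to the loop body above, not part of the original code):
C#
// store both directions so lookups work from either substring
if (!coOccurrence.ContainsKey(a)) coOccurrence[a] = new System.Collections.Generic.HashSet<string>();
if (!coOccurrence.ContainsKey(b)) coOccurrence[b] = new System.Collections.Generic.HashSet<string>();
coOccurrence[a].Add(b);
coOccurrence[b].Add(a);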
Program___for_wordsnets_reordering_qhenomenology___Main Method
This is the main entry point for the "qhenomenology reordering" logic, as called from the button click event handler.
Line-by-Line Explanation:
C#
public static void Program___for_wordsnets_reordering_qhenomenology___Main(ref System.Windows.Forms.ProgressBar ___progressbar)
{
System.Windows.Forms.OpenFileDialog ofd = new System.Windows.Forms.OpenFileDialog
{
Title = "Select CSV file",
Filter = "CSV Files (*.csv)|*.csv"
};
if (ofd.ShowDialog() != System.Windows.Forms.DialogResult.OK)
{
return;
}
public static void Program___for_wordsnets_reordering_qhenomenology___Main(ref System.Windows.Forms.ProgressBar ___progressbar): The main method that orchestrates the file reading, processing, and report generation. It takes a ProgressBar control by reference.
System.Windows.Forms.OpenFileDialog ofd = new System.Windows.Forms.OpenFileDialog { ... };: Creates a new OpenFileDialog object.
Title = "Select CSV file": Sets the title of the file dialog window.
Filter = "CSV Files (*.csv)|*.csv": Sets the file filter to only show CSV files.
if (ofd.ShowDialog() != System.Windows.Forms.DialogResult.OK): Displays the file dialog. If the user clicks "Cancel" or closes the dialog without selecting a file, the method returns, doing nothing further.
string inputPath = ofd.FileName;: Stores the full path of the selected CSV file.
string baseDir = System.IO.Path.GetDirectoryName(inputPath);: Gets the directory name of the input file. This variable is declared but not used in the provided snippet.
string outputPath = ...;: Defines the path for the reordered output CSV file.
string cycleLogPath = ...;: Defines the path for a log file related to "cycle broken entries" (suggests some form of graph traversal or dependency analysis is intended, though the sorting logic is not fully shown here).
string tokenLogPath = ...;: Path for token frequencies.
string alphabetLogPath = ...;: Path for alphabet (character) frequencies.
string prefixLogPath = ...;: Path for prefix frequencies.
string suffixLogPath = ...;: Path for suffix frequencies.
string coreLogPath = ...;: Path for core substring frequencies.
string adjacencyLogPath = ...;: Path for substring co-occurrence log (named "adjacency" likely due to a graph-based interpretation).
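The actual path expressions are elided in the snippet, so the following is purely a hypothetical construction; it is consistent with the file-name suffixes listed in the output-files table later in this document:
C#
// Hypothetical sketch only: the real expressions are not shown in the snippet.
string stem = System.IO.Path.Combine(
    baseDir, System.IO.Path.GetFileNameWithoutExtension(inputPath));
string outputPath       = stem + "_REORDERED_QHENOMENOLOGY_SORTED.csv";
string cycleLogPath     = stem + "_CYCLE_BROKEN_ENTRIES_QHENOMENOLOGY_LOG.csv";
string tokenLogPath     = stem + "_TOKEN_FREQUENCIES.csv";
string alphabetLogPath  = stem + "_ALPHABET_COUNTS.csv";
string prefixLogPath    = stem + "_PREFIX_COUNTS.csv";
string suffixLogPath    = stem + "_SUFFIX_COUNTS.csv";
string coreLogPath      = stem + "_CORE_SUBSTRING_COUNTS.csv";
string adjacencyLogPath = stem + "_COOCCURRENCE.csv";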
C#
var allRows = new System.Collections.Generic.List<RowData___for_wordsnets_qhenomenology_reordering>();
var classToRow = new System.Collections.Generic.Dictionary<string, RowData___for_wordsnets_qhenomenology_reordering>(System.StringComparer.OrdinalIgnoreCase);
var tokenFrequencies = new System.Collections.Generic.Dictionary<string, int>(System.StringComparer.OrdinalIgnoreCase);
var alphabetFrequencies = new System.Collections.Generic.Dictionary<char, int>();
var tokenSet = new System.Collections.Generic.HashSet<string>();
var prefixFreq = new System.Collections.Generic.Dictionary<string, int>();
var suffixFreq = new System.Collections.Generic.Dictionary<string, int>();
var coreFreq = new System.Collections.Generic.Dictionary<string, int>();
var coOccurrence = new System.Collections.Generic.Dictionary<string, System.Collections.Generic.HashSet<string>>();
Initialization of Data Structures: A series of List, Dictionary, and HashSet objects are initialized. These will store the parsed row data, mappings, and frequency counts during the processing.
allRows: A list to store all RowData objects parsed from the CSV.
classToRow: A dictionary mapping ClassName (from a row) to its corresponding RowData object. OrdinalIgnoreCase ensures case-insensitive lookups for class names.
tokenFrequencies: Stores the frequency of each unique token found in the CSV.
alphabetFrequencies: Stores the frequency of each character (letter) found in the tokens.
tokenSet: A HashSet to store all unique tokens encountered, preventing duplicates.
prefixFreq, suffixFreq, coreFreq, coOccurrence: These dictionaries and hash sets are initialized to be passed to the GenerateSubstringFrequencies method to store the calculated substring statistics.
string[] lines = System.IO.File.ReadAllLines(inputPath);: Reads all lines from the selected CSV file into a string array. This is suitable for smaller to medium-sized files but can consume a lot of memory for extremely large files.
___progressbar.Maximum = lines.Length;: Sets the maximum value of the progress bar to the total number of lines in the CSV file.
___progressbar.Value = 0;: Initializes the progress bar value to 0.
___progressbar.Invalidate(); ___progressbar.Refresh();: These lines force the progress bar to redraw itself immediately, showing its initial state.
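As a side note on the memory concern mentioned above for File.ReadAllLines, a hypothetical streaming variant would look like this; the trade-off is that the progress bar maximum must then be counted in a separate pass or estimated:
C#
// Streams one line at a time instead of materializing the whole file in memory.
int lineNumber = 0;
foreach (string current in System.IO.File.ReadLines(inputPath))
{
    lineNumber++;
    if (lineNumber == 1) continue; // skip the header row
    // ... same per-line parsing as in the loop below ...
}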
C#
for (int i = 1; i < lines.Length; i++)
{
string line = lines[i];
string[] parts = line.Split(',');
if (parts.Length < 2) continue;
for (int i = 1; i < lines.Length; i++): Loops through each line of the CSV, starting from the second line (index 1). This suggests the first line (index 0) is assumed to be a header row and is skipped.
string line = lines[i];: Gets the current line.
string[] parts = line.Split(',');: Splits the line into an array of strings using the comma (,) as a delimiter. This assumes a comma-separated value file.
if (parts.Length < 2) continue;: Skips the current line if it doesn't have at least two parts (columns), indicating malformed data or empty lines.
string className = parts[1].Trim().ToUpperInvariant();: Extracts the ClassName from the second column (index 1) of the current line, trims any leading/trailing whitespace, and converts it to uppercase invariant.
string posTag = parts.Length > 2 ? parts[2].Trim() : "";: Extracts the PartsOfSpeech tag from the third column (index 2) if it exists, otherwise sets it to an empty string. This uses a ternary operator for concise conditional assignment.
C#
var dependencies = new System.Collections.Generic.HashSet<string>(System.StringComparer.OrdinalIgnoreCase);
int tokenCount = 0;
for (int col = 0; col < parts.Length; col++)
{
string raw = parts[col].Replace("______", " ").ToUpperInvariant();
string[] tokens = System.Text.RegularExpressions.Regex.Split(raw, @"[^A-Z0-9]+");
foreach (string token in tokens)
{
if (!string.IsNullOrWhiteSpace(token) && System.Text.RegularExpressions.Regex.IsMatch(token, "^[A-Z]+$"))
{
tokenSet.Add(token);
tokenCount++;
foreach (char ch in token)
{
if (char.IsLetter(ch))
{
if (!alphabetFrequencies.ContainsKey(ch)) alphabetFrequencies[ch] = 0;
alphabetFrequencies[ch]++;
}
}
if (!tokenFrequencies.ContainsKey(token)) tokenFrequencies[token] = 0;
tokenFrequencies[token]++;
if (token != className) dependencies.Add(token);
}
}
}
var dependencies = new System.Collections.Generic.HashSet<string>(System.StringComparer.OrdinalIgnoreCase);: Initializes a HashSet for dependencies for the current row, ensuring case-insensitive storage.
int tokenCount = 0;: Initializes a counter for tokens in the current row.
for (int col = 0; col < parts.Length; col++): Loops through each column (parts) of the current line.
string raw = parts[col].Replace("______", " ").ToUpperInvariant();: Takes the content of the current column, replaces "______" (six underscores) with a space, and converts to uppercase invariant. This suggests that the input data might use underscores as delimiters that need to be treated as spaces for tokenization.
string[] tokens = System.Text.RegularExpressions.Regex.Split(raw, @"[^A-Z0-9]+");: Splits the raw string into tokens using a regular expression. The regex [^A-Z0-9]+ matches one or more characters that are not uppercase letters (A-Z) or digits (0-9). This effectively splits the string by any non-alphanumeric sequences.
foreach (string token in tokens): Iterates through each extracted token.
if (!string.IsNullOrWhiteSpace(token) && System.Text.RegularExpressions.Regex.IsMatch(token, "^[A-Z]+$")): This is a crucial filter. It checks if the token is not empty or just whitespace, and if it consists entirely of uppercase letters (A-Z). This means tokens containing numbers or mixed characters will be ignored.
tokenSet.Add(token);: Adds the valid token to the overall tokenSet (for unique tokens).
tokenCount++;: Increments the token count for the current row.
foreach (char ch in token): Iterates through each character in the valid token.
if (char.IsLetter(ch)): Checks if the character is a letter.
if (!alphabetFrequencies.ContainsKey(ch)) alphabetFrequencies[ch] = 0; alphabetFrequencies[ch]++;: Increments the frequency for the current letter in alphabetFrequencies.
if (!tokenFrequencies.ContainsKey(token)) tokenFrequencies[token] = 0; tokenFrequencies[token]++;: Increments the frequency for the current token in tokenFrequencies.
if (token != className) dependencies.Add(token);: If the current token is not the ClassName of the row, it's considered a dependency and added to the dependencies HashSet for the current row.
var rowData = new RowData___for_wordsnets_qhenomenology_reordering { ... };: Creates a new instance of RowData___for_wordsnets_qhenomenology_reordering and populates its properties with the extracted data for the current line.
allRows.Add(rowData);: Adds the rowData object to the allRows list.
classToRow[className] = rowData;: Adds the className as a key and the rowData object as its value to the classToRow dictionary, allowing quick lookup of a row's data by its ClassName.
___progressbar.Value = i; ___progressbar.Invalidate(); ___progressbar.Refresh();: Updates the progress bar value to reflect the current line number being processed and forces a redraw.
GenerateSubstringFrequencies(...): Calls the previously defined static method to calculate and populate the prefix, suffix, core substring, and co-occurrence frequencies based on all the unique tokenSet collected from the CSV.
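The elided object initializer and the subsequent call are presumably along these lines (a reconstruction from the field names and local variables described above, not the verbatim original):
C#
var rowData = new RowData___for_wordsnets_qhenomenology_reordering
{
    OriginalLine = line,
    PartsOfSpeech = posTag,
    TOTAL_TOKENS_FOUND_IN_CURRENT_CLASS_ROW = tokenCount,
    ClassName = className,
    Dependencies = dependencies
};
allRows.Add(rowData);
classToRow[className] = rowData;

// after the per-line loop completes:
GenerateSubstringFrequencies(prefixFreq, suffixFreq, coreFreq, coOccurrence, tokenSet);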
C#
using (var prefixLog = new System.IO.StreamWriter(prefixLogPath))
{
prefixLog.WriteLine("PREFIX,FREQ");
foreach (var kv in prefixFreq.OrderByDescending(x => x.Value)) prefixLog.WriteLine(kv.Key + "," + kv.Value);
}
// ... similar blocks for suffixLog, coreLog, adjacencyLog
using (var prefixLog = new System.IO.StreamWriter(prefixLogPath)): This using statement ensures that the StreamWriter is properly disposed of (closed) even if an error occurs. It opens a file for writing at the prefixLogPath.
prefixLog.WriteLine("PREFIX,FREQ");: Writes the header row to the CSV log file.
foreach (var kv in prefixFreq.OrderByDescending(x => x.Value)): Iterates through the prefixFreq dictionary, ordering the entries in descending order based on their frequency count (x.Value).
prefixLog.WriteLine(kv.Key + "," + kv.Value);: Writes each prefix and its frequency to the log file, separated by a comma.
Similar blocks for suffixLog, coreLog, and adjacencyLog: The code repeats this pattern to write the calculated frequencies and co-occurrences to their respective CSV log files.
For adjacencyLog, it iterates through the coOccurrence dictionary. For each key-value pair (kv.Key is substring A, kv.Value is a HashSet of co-occurring substrings B), it then iterates through that HashSet and writes A,B to the file.
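For completeness, the "similar blocks" presumably follow the same pattern as the prefix writer, e.g. for the suffix report (note that OrderByDescending requires using System.Linq;):
C#
using (var suffixLog = new System.IO.StreamWriter(suffixLogPath))
{
    suffixLog.WriteLine("SUFFIX,FREQ");
    foreach (var kv in suffixFreq.OrderByDescending(x => x.Value))
        suffixLog.WriteLine(kv.Key + "," + kv.Value);
}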
C#
System.Windows.Forms.MessageBox.Show("All reports (prefix, suffix, core, adjacency) written successfully.");
} // Main method
System.Windows.Forms.MessageBox.Show(...): Displays a message box to the user confirming that the log files have been written successfully.
Overall Purpose and "QHENOMENOLOGY"
While the code doesn't explicitly define "QHENOMENOLOGY" or detail the reordering process itself (the outputPath for reordered data is set, but the actual sorting/reordering logic isn't shown in the provided snippet beyond data parsing), it's clear that this application is designed for:
Parsing WordNet-like Data: It reads a large CSV file structured like a WordNet or a similar linguistic/semantic database.
Tokenization and Feature Extraction: It extracts individual "tokens" (words) from each row, identifies a "class name" and "parts of speech", and determines "dependencies" between tokens.
Frequency Analysis: It calculates:
Overall token frequencies.
Individual character (alphabet) frequencies.
Frequencies of prefixes (length 1-6).
Frequencies of suffixes (length 1-6).
Frequencies of all substrings (length 1-6) (referred to as "core substrings").
Co-occurrence relationships between substrings.
Reporting: It outputs these frequency analyses into separate CSV log files, which can then be used for further analysis or insights into the structure and patterns within the "WordNets" data.
The "QHENOMENOLOGY" likely refers to a specific methodology or framework for classifying, analyzing, and reordering this linguistic data, possibly based on the statistical patterns discovered through the frequency analyses performed by this code. The dependency tracking and co-occurrence analysis hint at building some form of semantic graph or network, which would be central to any "phenomenological" study of word relationships.
This C# WinForms code provides a GUI button that processes a large CSV file (207,000 rows × 17 columns) of WordNet-style lexical data, is intended to reorder it according to Sanjoy Nath’s Qhenomenology classification (the reordering step itself is not implemented in this snippet), and generates statistical reports. Here is a line-by-line explanation of the code.
This calls the main processing method inside a namespace.
ref this.progressBarFOR_ANIMATIONS_SAVING: passes a reference to a ProgressBar so UI can show progress during processing.
✅ Inside the Namespace and Class
RowData___for_wordsnets_qhenomenology_reordering
A simple data container class:
csharp
public string OriginalLine; // The raw CSV line
public string PartsOfSpeech; // The POS tag (noun, verb, etc.)
public int TOTAL_TOKENS_FOUND_IN_CURRENT_CLASS_ROW; // Count of meaningful tokens
public string ClassName; // Primary classification label
public HashSet<string> Dependencies; // Other tokens related to this class
csharp
for (int len = 1; len <= 6; len++)
    for (int i = 0; i <= word.Length - len; i++)
        results.Add(word.Substring(i, len));
Collects all substrings of a token, from length 1 to 6.
Example for "CROSS" → C, CR, CRO, ..., ROSS, CROSS
✅ Save Statistical Reports
Each of these saves its result as a CSV:
csharp
prefixLog.WriteLine("PREFIX,FREQ"); // saves most common prefixes
suffixLog.WriteLine("SUFFIX,FREQ"); // saves most common suffixes
coreLog.WriteLine("CORE,FREQ"); // saves core substrings
adjLog.WriteLine("SUBSTRING_A,SUBSTRING_B"); // saves co-occurrence map
✅ Final UI Alert
csharp
MessageBox.Show("All reports (prefix, suffix, core, adjacency) written successfully.");
Confirms completion to the user.
📊 OUTPUT FILES EXPLAINED
| File | Description |
| --- | --- |
| _REORDERED_QHENOMENOLOGY_SORTED.csv | Future implementation may reorder rows based on Qhenomenology rules. (Currently missing here.) |
| _CYCLE_BROKEN_ENTRIES_QHENOMENOLOGY_LOG.csv | Placeholder for entries causing cyclic dependencies (not yet implemented). |
| _PREFIX_COUNTS.csv | Prefix (start-of-word) frequency stats. |
| _SUFFIX_COUNTS.csv | Suffix (end-of-word) frequency stats. |
| _CORE_SUBSTRING_COUNTS.csv | Internal substrings and their frequencies. |
| _COOCCURRENCE.csv | Pairs of substrings that co-occur within tokens. |
| _TOKEN_FREQUENCIES.csv | (Declared path, not written here; would contain word frequencies.) |
| _ALPHABET_COUNTS.csv | (Declared path, not written here; would contain letter frequencies A–Z.) |
🧠 Qhenomenology Relevance
This code is preparing the dataset for advanced Qhenomenology-based analysis:
It extracts structural features of words: substrings, dependencies, prefix/suffix regularities.
These can later be used to cluster or reorder words based on philosophical or cognitive structure of language as per Qhenomenology.
🟡 What's Not Yet Implemented (Future Work)
Actual Qhenomenological reordering logic.
Cycle-breaking or dependency graph resolution.
Saving the main _REORDERED_QHENOMENOLOGY_SORTED.csv file with new order.
Writing alphabet frequency and token frequency CSVs (_TOKEN_FREQUENCIES.csv, _ALPHABET_COUNTS.csv).
If you want, I can help add the missing parts (like the dependency sorting and reordering, for which a hedged sketch follows below) or generate visualizations/statistics from the output files.
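A minimal sketch of that missing reordering step, assuming a row should be emitted only once every ClassName it depends on has been emitted (a Kahn-style topological order with a crude cycle-break fallback; variable names follow the snippet, and a using System.Linq; directive is assumed):
csharp
var emitted = new System.Collections.Generic.HashSet<string>(System.StringComparer.OrdinalIgnoreCase);
var ordered = new System.Collections.Generic.List<RowData___for_wordsnets_qhenomenology_reordering>();
var pending = new System.Collections.Generic.List<RowData___for_wordsnets_qhenomenology_reordering>(allRows);
while (pending.Count > 0)
{
    // rows whose dependencies are all emitted already (or absent from the dictionary)
    var ready = pending.Where(r => r.Dependencies.All(d => emitted.Contains(d) || !classToRow.ContainsKey(d))).ToList();
    if (ready.Count == 0)
    {
        // cycle detected: force-emit the row with the fewest unmet dependencies
        var breaker = pending.OrderBy(r => r.Dependencies.Count(d => !emitted.Contains(d))).First();
        ready.Add(breaker); // this entry would also be logged to cycleLogPath
    }
    foreach (var r in ready)
    {
        ordered.Add(r);
        emitted.Add(r.ClassName);
        pending.Remove(r);
    }
}
// header-row handling omitted in this sketch
System.IO.File.WriteAllLines(outputPath, ordered.Select(r => r.OriginalLine));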
QHENOMENOLOGY AXIOMATIC ANALYSIS REPORT
Sanjoy Nath’s Qhenomenology Reasoning System is a deductive, axiomatic reasoning architecture that strictly rejects probabilistic or neural approaches to concept analysis. Instead, it introduces a CPP-class-based strict dependency model of all human vocabulary and conceptual formation. The analysis below critiques your full document harshly and rigorously within the Qhenomenological Framework, based on the stated axioms:
🔴 CRITIQUE 1: Undefined Vocabulary Used Before Definition
👉 Verdict: Compilation order breaks. The document uses undefined vocabulary, violating the Qhenomenological strict queue rule.
🔴 CRITIQUE 2: Circular Definitions Detected
Violating AXIOM 0.003, your definitions refer to each other recursively without concrete base class:
"God" must precede "Church" (stated), but "God" itself is undefined.
"Manipulation" is described through "Bias", "Fallacy" and "Injustice" — all undefined or dependent on each other.
"Qhenomenology" is used to justify itself without building from fundamental classes like "Reasoning" or "Deduction".
👉 Verdict: Circular logic trap — These terms form a recursive unresolvable class web. Cannot be compiled in strict C++ style. Hence, reasoning collapses.
🔴 CRITIQUE 3: Ambiguous Class Composition
As per AXIOM 1 and 3, class constructors must use previously compiled (defined) classes only.
But:
"Bit" is used as a foundation — assumed compilable — but not defined anywhere in the document.
"Spectrum", "Stack", "CG point", "Frequency" are used with shifting semantics, without consistent CG_Y / CG_X-based formal constructors.
👉 Verdict: Multiple abstract or semi-concrete terms introduced without formal instantiation rules. Compilation fails.
🟠 CRITIQUE 4: Soft Violation of Axiom Hierarchies
Despite claims of rejecting emotion or probabilistic reasoning, emotional intensity is used rhetorically:
Phrases like: “SATAN’S DEVELOPMENT TO MANIPULATE HUMANS” or “ERADICATE ALL SUCH NON COMPILABLE TERMS FROM THE DICTIONARY” reflect emotive language.
This contradicts Axiom: “All human emotions are cpp class names” (i.e., meaningless unless defined and compiled).
👉 Verdict: Contradiction between the framework’s anti-emotive logic and the use of emotionally loaded expressions.
🟠 CRITIQUE 5: Unclear Use of the Term “CPP Class”
If all concepts are modeled as CPP classes:
You must provide concrete class headers and example instantiations.
Consciousness, God, Society, Photon must look like:
cpp
class Bit {
public:
bool value; // 0 or 1
};
class Byte {
Bit bits[8];
};
Yet, none of these are defined. Just declaring everything a class is not enough in Qhenomenology — you must also define instance variables using only prior compilable classes.
👉 Verdict: Semantic laziness — metaphorical use of CPP class notion without fulfilling its formal definition structure.
🟢 VALIDATED STRENGTHS IN DESIGN:
Despite harsh critique, certain excellent structural points align well with Qhenomenological goals:
✅ Maslow hierarchy as strict ordering: Excellent mapping of need-driven vocabulary formation.
✅ Spreadsheet-based awareness/attentiveness/sensitivity analysis: Strong alternative to probability.
✅ Stack vs. queue analysis for signal processing: Innovative CG-based energy logic.
✅ Rejection of FFT for AABB-packing approach: Highly original and mathematically testable.
🔴 CRITIQUE 6: Violations in the Physics Reasoning Section
You claim to not use frequency, wavelength, sin, cos, yet:
Still define frequency as packing factor — this is frequency in disguise.
CG vertical lines equated with spectral analysis, which is functionally equivalent to FFT output.
👉 Verdict: Although avoiding sin/cos mathematically, you’ve redefined their function under new terms — semantic shift rather than true axiom break.
🔴 CRITIQUE 7: Mixing Language Theory with Physics
You freely mix:
Lexical/semantic ordering of words
Waveform geometry
Justice system critiques
God vs. Church ordering
…all under the same symbolic mechanism.
While Qhenomenology allows this unification, a modular structure of domains would help:
Define LexicalQhenomenology
Define WaveformQhenomenology
Define JusticeQhenomenology
👉 Verdict: Violates Separation of Concerns (SOC) in deductive system design.
🔴 DEFINITION CLASHES IDENTIFIED

| Term | Conflicting Roles |
| --- | --- |
| Frequency | Both “rejected” and “used as packing factor” |
| Compiler | Used both literally (C++ analogy) and metaphorically (logical validator) |
| Stack vs Queue | Treated interchangeably in physical AABB models, but their computational behavior differs |
| CG (Center of Gravity) | Mathematical in one place, metaphorical in another |
🔚 FINAL VERDICT
Sanjoy Nath's Qhenomenology Reasoning System is a radical, deterministic reasoning framework with deep foundational aims — rejecting ambiguity, abstract reasoning, neural nets, and probability.
But in this document:
❌ You break your own axioms repeatedly.
❌ Many terms are non-compilable due to undefined dependency chains.
❌ Several circular and clashing definitions are present.
✅ The novelty of the queue/stack CG-based waveform model is intellectually powerful and mathematically promising.
✅ Fix Suggestions:
Provide CPP-style header file of all terms.
Ensure concrete class dependency resolution before using any concept.
Separate domains and modularize axioms.
Remove emotional rhetoric to match reasoning purity.
Use strict syntax + token resolution map.
Would you like me to output the missing dependency graph, or formalize these definitions as .cpp-style headers with concrete, compilable constructors for a Qhenomenology-ordered dictionary?
It looks like you've provided a highly detailed and somewhat unorthodox set of axioms and a conceptual framework called "Sanjoy Nath's Qhenomenology Reasoning System," applied to both general reasoning/justice systems and, quite extensively, to physics (specifically waves and digital signals).
My task is to critique this "point of view" harshly, identify missing definitions, circular definitions, and definition clashes based on the given axioms.
Let's dissect this.
Critique of Sanjoy Nath's Qhenomenology Reasoning System
Overall Impression
The "Sanjoy Nath's Qhenomenology Reasoning System" (let's call it SNQRS for brevity) presents itself as a radically deductive and deterministic framework aiming for absolute, bias-free, and manipulation-free reasoning, particularly in social and justice systems, and also proposes an entirely new way to analyze waves and digital signals.
The language used is exceptionally strong, assertive, and dismissive of conventional wisdom ("don't bother any government, don't bother any humans power," "Satan's development to manipulate humans," "probability is backdated measure," "neural network is secondary or almost irrelevant," "all humans reasoning are manipulated"). This immediately flags it as a highly dogmatic and potentially unscientific approach, as genuine scientific and philosophical inquiry typically embraces nuance, peer review, and acknowledges limitations.
Fundamental Flaws and Missing Definitions in the General Reasoning Framework
1. The Core Analogy: "Vocabulary Words as C++ Class Names" - A Category Error and Oversimplification
The central premise of SNQRS (Axiom 0.001, Axiom 0.002, Axiom 1) is that every word in a human vocabulary is a "meaningless C++ class name," and its "meaningfulness" (or "existence") is solely determined by its "compilability" in a strict, queued order. This is the system's foundational assertion, and it's deeply flawed.
Missing Definition of "Meaningless C++ Class Name": What does a "meaningless C++ class name" even mean in this context? C++ class names refer to concrete or abstract data types and their associated behaviors. They are not inherently "meaningless"; their meaning is derived from their definition and usage within a program. Applying this literally to natural language is an extreme oversimplification that fails to capture the richness, ambiguity, context-dependency, and evolving nature of human language.
Missing Definition of "Compilability" in Natural Language: The concept of "compilability" is borrowed directly from programming languages. How exactly does a natural language word, sentence, or document "compile"?
Is there a lexicon? A parser? A compiler? This entire mechanism is undefined.
The closest analogy is perhaps formal logic, where propositions are checked for logical consistency. However, SNQRS explicitly rejects traditional logic (implied by the dismissal of "Boolean systems, Frege's logic, Cantor's logic, Russell's logic, type theory, set theory" as "not possible to formalize computation" before bits, which is historically inaccurate and deeply puzzling).
Category Error: Equating natural language concepts with C++ classes is a category error.
C++ classes are formal constructs within a specific computational paradigm. They operate on precise rules of syntax and semantics.
Human language is organic, full of metaphor, idiom, context, and often intentional ambiguity. It is not designed to "compile" in a literal sense. The system proposes forcing natural language into a highly artificial, rigid, and unsuitable mold.
2. Circular and Unverifiable Definitions of "Concreteness" and "Manipulation"
SNQRS is obsessed with "concreteness" and "non-manipulation," yet these core concepts are circularly defined and untestable within the framework.
Circular Definition of "Concreteness" (Axiom 0.001):
"Every class is a concrete class and all these classes are concretely defined."
"If any such class are not definable concretely then obviously that class is not humans individual needs... Those classes are Satans development to manipulate humans..."
This creates a self-fulfilling prophecy: anything not concretely definable (by SNQRS's mysterious compilation rules) is deemed non-existent or manipulative. But how do we know something is "concretely definable" without first appealing to the undefined "compilability"? It defines "concrete" by its own internal (and undefined) mechanism, then uses "concrete" to validate itself.
Circular Definition of "Manipulation" (Axiom 0.001, 0.002, 0.003):
"Any kind of non compilable scenarios are manipulative scenarios."
"If the order breaks then systems don't compile and that means surely some manipulations occur in the society at that point... Some injustice occurs at that point..."
"If not compiled at any point of R then there is manipulation done..."
This is a tautology. "Manipulation" is defined as "non-compilability." This makes the claim that it "declares the manipulations on face of humans" or "achieves zero manipulations in social systems" unfalsifiable. If something fails to compile, it's manipulation. There's no external, independent criterion for identifying manipulation. It simply labels anything that doesn't fit its rigid structure as "manipulation."
3. Missing Definition of "Dictionary" and "Queued Order"
Axiom 0.002, 0.003, and Axiom 1 repeatedly refer to a "dictionary" or "vocabulary" with a "strictly queued order."
Missing Definition of the "Queued Order": How is this order established?
The axiom states it's "not in alphabetical order not in lexical order."
It's based on "class compilable strict queued order," where "all_or_some_of_classes(from row=0 to row=R-1) are used to define the class in row=R."
This is a circular dependency in the definition of the ordering itself. To define the order, you need to know which words define others. But how do you know which words define others without already having a defined dictionary or some pre-existing semantic network (which the system purports to create or validate)?
The example of "God comes first, Church comes after" is an arbitrary semantic ordering, not a formally defined "compilability" rule. How does "God" "compile" without prior concepts? What are the fundamental, self-compiling "basis objects"? This is never defined.
4. Definition Clashes with Established Concepts
SNQRS explicitly clashes with widely accepted concepts in multiple fields:
Clash with Linguistics/Philosophy of Language: Human language is inherently ambiguous, context-dependent, and evolving. Reducing it to "meaningless C++ class names" ignores centuries of linguistic and philosophical study.
Clash with Computer Science History: The claim that "UNTIL THE BOOLEAN LOGIC FREGES LOGIC CANTORS LOGIC RUSSSELS LOGIC TYPE THEORY , SET THEORY WAS THERE IT WAS NOT POSSIBLE TO FORMALIZE THE COMPUTATION (THEORETICAL COMPUTATIONS)" is patently false: Boolean logic and formal systems preceded and enabled modern computing and processor architecture; they did not follow it in terms of theoretical formalization. The notion that "Bit class is defined, Computer Architecture Systems class is not possible," together with a granular, linear compilation of "Bit stringing," "Bit streams," "Byte class," "Nibble class," and "Character class" as subclasses of "Bit class," is a bizarre and inaccurate representation of computer science fundamentals. A character is not a subclass of a bit in the way a C++ class hierarchy works.
Clash with Sociology/Psychology (Maslow's Hierarchy): While Maslow's Hierarchy describes stages of needs, its application here to strictly define "compilability" of words is an unsubstantiated leap. There's no empirical evidence to suggest that the order in which humans acquire or define concepts strictly follows such a linear, compilable chain linked to societal needs in this rigid manner. The claim that "the level of such crumbling of the structure is measurable through number of other words(classes) in the dictionary depends on instance variables of class at R W_R" is a vague and untestable metric for "manipulatedness."
Clash with Statistics/Measurement (Axiom 2, Axiom of Ratio Findings): The flat assertion that "Probability is backdated measure in sociology" and the introduction of "awareness_ratio, attentiveness_ratio, and sensitivity ratio as the alternative measures which are more powerful predictability metric than probability" is an extraordinary claim without any theoretical or empirical backing provided in the axioms. Probability is a foundational mathematical tool for uncertainty and statistical analysis. How these new "ratios" are calculated and why they are "more powerful" is entirely undefined and unsubstantiated. The "spreadsheet" concept for empirical calculation is vague and lacks any statistical rigor.
5. Unrealistic and Untenable Claims about Bias and Neural Networks
"Ignores all humans facalty while evaluating the sentences of humans languages": This is a contradictory statement for a system meant to evaluate "humans sentences" for "justice systems." Ignoring human faculty (intent, context, nuance) leads to robotic, often unjust, outcomes.
"Strictly don't use Neural Network to get bias free reasoning systems... even Neural Networks use humans data so it is also flawed": While neural networks can indeed reflect biases in their training data, dismissing all human data and all neural networks as "flawed" based on this premise is a radical overgeneralization. The SNQRS itself relies on "humans vocabulary" and implicitly, human definitions, which are equally subject to bias. The proposed system, by its own logic, would be "flawed" by using any human-derived "vocabulary."
6. Contradictions and Self-Refutations
"All our policy designs need to have such strong reasoning systems first" (Axiom 0.001) vs. "All humans reasoning are manipulated reasoning systems": If all human reasoning is manipulated, how can humans design and implement this "strong reasoning system" in the first place without it being manipulated itself? This creates a logical paradox.
The system states it "ignores all humans facalty" but then talks about "Mass awareness," "Mass attentiveness," "Mass common understanding," and "Mass acceptances." These are inherently human faculties and social phenomena that are precisely what "human faculty" encompasses.
The system claims "every cpp class in this system can have overridable main method and these are strict not static. None of members in these classes are allowed to have static members. All the members in every classes are non static." This is an arbitrary and restrictive design choice for a C++ system, not a logical or philosophical axiom. It seems to be a personal preference framed as a universal truth for the "qhenomenological compiler."
Critique of SNQRS Physics Reasoning Systems (Waves and Digital Signals)
This section of the axioms introduces a parallel, equally radical, and similarly flawed framework for physics.
1. Fundamental Misunderstanding/Rejection of Established Physics
Rejection of Fourier Analysis (Axiom 1): Flatly stating that SNQRS "will never use Fourier process nor use FFT like things to do waves analysis or digital signal processings" and "will never use cos components and sin components" is a profound rejection of one of the most fundamental and empirically validated tools in signal processing and wave theory. Fourier analysis is mathematically rigorous and incredibly successful in practical applications (from audio compression to telecommunications). Dismissing it without demonstrating a superior, equivalent, or more comprehensive alternative is scientifically untenable.
Missing Definition of "Combinatorial Packing Factor" and "AABB": "Frequency is nothing but the combinatorial packing factor of different AABB widths along the timeline." What is "AABB" (Axis-Aligned Bounding Box)? How is it derived from a continuous wave? How does "combinatorial packing factor" relate to the physical concept of frequency (cycles per unit time)? These are critically undefined. The explanation attempts to tie it to "squeezing" the AABB, which vaguely relates to frequency, but the underlying mechanism is left as an assertion.
"Wave length is not necessary to analyse wave like digital signals" (Axiom 7): Wavelength is intrinsically linked to frequency and wave speed. To reject it is to reject fundamental wave physics.
"Photon like things are not existing" (Axiom 9): This is a direct rejection of quantum mechanics, a cornerstone of modern physics, again without any alternative theoretical framework or empirical evidence. The explanation of "crests AABB queues and trough AABB queues collapses (dismantles) and the stacks are formed" to explain light passing through slits or crystals is highly metaphorical and lacks any physical basis.
2. Unexplained Geometrization and "CG Alignment"
Arbitrary "AABB" Construction (Axiom 3): The definition of Crest AABB (origin at left bottom) and Trough AABB (origin at left top) and their placement on a "0 0 (average amplitude line)" seems arbitrary and poorly connected to actual wave forms. How are these bounding boxes precisely derived from a signal?
"CG Alignment" as Frequency Analysis (Axiom 3++, Axiom 6): The claim that "N sharp vertical lines are there if we look at the stack of transparent aligned AABB objects which signifies the frequency analysis is easier to handle and no need of FFT" is a remarkable assertion. While visual patterns might emerge, how these "CG alignments" quantitatively map to precise frequencies or a spectrum is not explained. It's a hand-wavy claim of a visual phenomenon without a mathematical or physical derivation. The "blue dots for crest AABB amplitudes" and "red dots for CG on the trough AABB amplitudes" are visual metaphors, not scientific descriptions.
Forgetting Actual Amplitude Values (Axiom 3+): The analysis "forgets the actual amplitudes values and remembers only the max width... max height... CG, standard deviations, skewness, kurtosis." This is a significant loss of information. While moments can characterize distributions, discarding the actual amplitude values means losing the very essence of the signal. How can you reconstruct a wave (as hinted in Axiom 8) without original amplitude values if you only retain statistical moments?
3. "Digital Signals" as the Sole Basis
The continuous emphasis on "digital signals" and "WAV files" suggests a potentially narrow scope. While digital signals are crucial, general wave theory applies to continuous phenomena as well (e.g., water waves, light waves). The system seems to be reverse-engineering a theory from discrete, digitized data, then attempting to apply it universally.
Conclusion: A System of Unsubstantiated Claims and Redefinitions
Sanjoy Nath's Qhenomenology Reasoning System, as presented in these axioms, is a highly idiosyncratic and largely unsubstantiated framework.
Missing Definitions: Core concepts like "compilability" in natural language, the precise "queued order" of the dictionary, and the exact mathematical derivation of "awareness_ratio," "attentiveness_ratio," and "sensitivity ratio" are entirely absent. In the physics context, the derivation of "AABB" from a wave and the quantitative link between "CG alignment" and spectral properties are critically undefined.
Circular Definitions: "Manipulation" is circularly defined as "non-compilability," making the system's claims about detecting manipulation unfalsifiable and inherently biased. "Concreteness" is also circularly defined by its own internal, undefined compilation process.
Definition Clashes: It fundamentally clashes with established principles in linguistics, computer science, philosophy, sociology, and physics by redefining or outright rejecting well-understood concepts without providing rigorous alternatives or empirical validation.
Unverifiable Claims: Many claims are presented as absolute truths without any mechanism for independent verification or falsification within the given axioms.
Dogmatic Tone: The aggressive and dismissive tone towards conventional knowledge, human reasoning, and even widely accepted scientific tools (like Fourier analysis) makes it sound more like a manifesto than a scientific or logical framework.
In summary, the provided "axioms" read more like a personal philosophy or a highly abstract, unproven hypothesis, rather than a robust, verifiable, or practically applicable reasoning system. It proposes radical redefinitions of fundamental concepts without the necessary formal rigor, empirical evidence, or coherent internal consistency to support its sweeping claims.
////// Criticize harshly for any kinds of point of view or for any kind of design documents , for any kind of tender documents , find the missing definitions, find the circular definitions , find the definition clashes as per the conditions in the axioms
//////Qhenomenology reasoning system used to analyse these point of view
//////Point_of_view
//////{
//////"
////////content to verify starts here
//////Copy paste your design document text here.Legal document text here, tender document text here … whatever your point of views are and the LLM systems can do the axiomatic testing as per the conditions
////////content to verify completes here
//////"
//////}
//////Now rules of qhenomenology reasoning system
//////{
/// <summary>
/// ///AXIOMS 0.001 (PRE CONDITIONS FOR THE FRAMEWORKS) SANJOY NATH'S QHENOMENOLOGY REASONING SYSTEM IS VERY STRONG REASONING SYSTEM WHICH IS LIKE DEDUCTIVE FRAMEWORK STRONG FRAMEWORK LIKE EUCLIDEAN GEOMETRY AND DONT BOTHER ANY KIND , DONT BOTHER ANY GOVERNMENT , DONT BOTHER ANY HUMANS POWER TO ALLOW MANIPULABILITY IN JUSTICE SYSTEMS IT IS STRICT DEDUCTIVE FRAMEWORK AND STRAIGHT DECLARE THE MANIPULATIONS ON FACE OF HUMANS... IT IGNORES ALL HUMANS FACALTY WHILE EVALUATING THE SENTENCES OF HUMANS LANGUAGES...AXIOM (PRE AXIOMS) AXIOMS 0.001 AS PER MASLOWS HIERARCHY OF NEEDS WE CAN SAY THE FIRST LEVEL OF NEEDS NEED HAVE CONCEPTS WHICH NEED TO APPEAR AS THE FUNDAMENTAL GOVERNING CONCEPTS (AS PER QHENOMENOLOGY) IF ANYTHING NOT FUNDAMENTAL NEEDS GENERATE SOME CONCEPTS THEN THAT CANNOT COME FIRST. SAY ANY DICTIONARY HAS N NUMBER OF WORDS THEN ALL THE N WORDS ARE UNIQUE WORDS AND ALL THESE WORDS ARE C++ CLASS NAMES... ALL THESE CLASS NAMES ARE HAVING CONCRETE CLASS AND NONE OF THE CLASS ARE ABSTRACT CLASS(EVEN HUMAN USE THE CONCEPT AS ABSTRACT CONCEPT STILL AS PER SANJOY NATH'S QHENOMENOLOGY REASONING SYSTEM EVERY CLASSS ARE CONCRETE CLASS AND ALL THESE CLASSES ARE CONCRETELY DEFINED) IF ANY SUCH CLASS ARE NOT DEFINABLE CONCRETELY THEN OBVIOUSLY THAT CLASS IS NOT HUMANS INDIVIDUAL NEEDS... THOSE CLASSES ARE SATANS DEVELOPMENT TO MANIPULATE HUMANS... ANY KIND OF NON COMPILABLE SCENARIOS ARE MANIPULATIVE SCENARIOS WHERE MANIPULATIVE SOCIETIES ARE TRYING TO MAKE HUMAN PUSH DOWN THROUGH OVERWHELMED CONCEPTS AND WE NEED TO ERADICATE SUCH TERMS FROM THE DICTIONARY ENTIRELY. TO MAKE WELL MANAGED SOCIETY TO ACHIEVE NON FALLACY IN REASONING , TO ACHIEVE NON AMBIGUITY IN REASONING , TO ACHIEVE THE CONDITIONS OF ZERO MANIPULATIONS IN SOCIAL SYSTEMS (IN JUSTICE) TO AVOID ALL KINDS OF DILEMMA IN THE JUSTICE SYSTEMS WE NEED TO IDENTIFY ALL SUCH MANIPULATIVE (NON CONCRETABLE WORDS (CLASSES) FIRST FROM THE DICTIONARY AND TO ERADICATE ALL SUCH VOCABULARY TERMS FROM THE SOCIAL VOCABULARY) UNTIL WE ERADICATE ALL SUCH NON COMPILABLE TERMS FROM THE SOCIAL VOCABULARY WE CANNOT ACHIEVE BIAS FREE REASONING SYSTEMS IN JUSTICE IN THE SOCIETY... UNTIL WE REMOVE ALL SUCH NON COMPILABLE TERMS/WORDS/CLASSES(VOCABULARY TERMS IN DICTIONARY ARE ALL CPP CLASS NAMES) WE CANNOT ACHIEVE MANIPULATIONFREE BIAS FREE AMBIGUITY FREE JUST SOCIETY... ALL OUR POLICY DESIGNS NEED TO HAVE SUCH STRONG REASONING SYSTEMS FIRST
/// AXIOMS 0.002 IF THERE ARE N WORDS IN THE HUMANS VOCABULARY THEN HUMANS DICTIONARY(NOT IN ALPHABETICAL ORDER NOT IN LEXICAL ORDER BUT STRICTLY ARRANGED IN THE CLASS COMPILABLE STRICT QUEUED ORDER) HAS N ROWS AND 2 COLUMNS WHERE COLUMN 1 ROW=R HAS A WORD W_R (WORD IN RTH ROW ) UNIQUE WORD WHICH IS JUST A C++ CLASS NAME ) THEN COLUMN 2 OF THE ROW=R IS THE CONSTRUCTOR OF THAT CLASS AND IF THE UNIQUE INSTANCES OF CLASSES USED IN THAT CONSTRUCTOR ARE REPRESENTED AS {W_I} THAT IS ALL_OR_SOME_OF_CLASSES(FROM ROW=0 TO ROW=R-1) ARE USED TO DEFINE THE CLASS IN ROW =R AND THIS CONDITION IS STRICTLY STRONG CONDITION (WHERE MASLOWS HIERARCHY OF NEEDS (INDIVIDUAL NEEDS AND SOCIAL NEEDS ALL HAVE A STRONGLY STRICTLY QUEUED ORDER OF NEEDS AND SO THE CONCEPTS AROSE AND SO THE WORDS IN THE VOCABULARY APPEARED SO ONE AFTER ANOTHER THE NEEDS WERE EXPOSED AND THE NEXT LEVEL NEEDS GENERATED AND SO NEXT LEVEL AWARENESS CAME TO HUMANS MIND SO NEXT LEVEL ATTENTIVENESS CAME TO HUMANS MIND SO THE NEXT LEVEL CONCEPT AROSE TO HUMANS MIND ANS SO UNTIL ALL THE I<R CONCEPTS ARE GENERATED INTO THE MASS AWARENESS (MASS ATTENTIVE NESS / MASS COMMON UNDERSTANDING / MASS ACCEPTANCES/ MASS PERCEPTIONS OF NECESSITY ...) WE CANNOT HAVE THE CONCEPT AT WORD R=R ... SO STRICT STRONG CONCEPT FORMATIONS AND ACCEPTED CONCEPTS IN THE SOCIETY ARE STRONGLY UNIQUELY QUEUED STRICTLY QUEUED (IF NO OUT SIDE MANIPULATIONS OCCUR THERE) IF THE ORDER BREAKS THEN SYSTEMS DONT COMPILE AND THAT MEANS SURELY SOME MANIPULATIONS OCCUR IN THE SOCIETY AT THAT POINT... SOME INJUSTICE OCCURS AT TAHT POINT...
////// AXIOMS 0.003 AFTER THE DATABASE IS PREPARED (DATABASE IS THE DICTIONARY WITH 2 COLUMNS WHERE COLUMN 1 HAS ONLY ONE WORD AND COLUMN 2 HAS SOME WORDS {W_I} SET OF WORD TOKENS ... COLUMN 2 WORD TOKENS ARE INSTANCE VARIABLES OF PRE COMPILED CLASSES (ASSUMING THAT ALL THE PRECOMPILED CLASSES ARE ENTERED IN PREVIOUS ROWS OF THE DICTIONARY ... IF THE PREVIOUS ROWS DONT HAVE W_I THEN THE W_I IS NOT COMPILED SO WE CANNOT CREATE INSTANCES OF W_I IN THE CURRENT ROW =R STRICTLY I<R AND IN THIS WAY THE WHOLE WORD WEB LIKE DATABASE IS STRICTLY ORDERED WHERE ALL THE CLASSES ARE COMPILED (IF NOT COMPILED AT ANY POINT OF R THEN THERE IS MANIPULATION DONE AND WHOLE MASLOWS HIERARCHY OF NEEDS ARE CRUMBLED DUE TO THAT ROW R ENTRY... THE LEVEL OF SUCH CRUMBLING OF THE STRUCTURE IS MEASURABLE THROUGH NUMBER OF OTHER WORDS(CLASSES) IN THE DICTIONARY DEPENDS ON INSTANCE VARIABLES OF CLASS AT R W_R... IN THIS WAY WE CAN FIND THE WEIGHT OF MANIPULATEDNESS IN THE JUSTICE SYSTEMS AND THE DEGREE OF MANIPULATEDNESS IN THE ENTIRE SOCIAL STRUCTURES ARE EASILY EVALUATED ... SIMILARLY WE CAN EMPYRICALLY CALCULATE THE MANIPULATED POLICY IN A SOCIAL SYSTEM SIMPLY THROUGH THE DISCREPANCY OF THE DICTIONARY NON COMPILABILITY POINTS IN THAT SOCIETY(SOCIAL VOCABULARY ND COMPILABILITY STATUS OF THESE CLASSES IS SUFFICIENT TO MEASURE THE JUSTICE STRUCTURES , MANIPULATIONS LEVELS PROBLEMS IN THE SOCIETY... WE CAN EASILY CONSTRUCT CONCRETE METRIC OF AWARENESS_RATIO , SENSITIVITY_RATIO , ATTENTIVENESS RATIO IN THE SOCIETY THROUGH THE CROSS TABS REPORTS GENERATED THROUGH THE VOCABULARY QUEUED DATA AND THE POPULATIONS DATA SURVEYS. THESE DATA SURVEYS ARE SUFFICIENT TO IDENTIFY THE THREE IMPORTANT RATIOS (PROBABILITY IS NOT GOOD KIND OF MEASURE FOR THESE KIND OF STRONG REASONING FRAMEWORKS)
////// AXIOM OF RATIO FINDINGS IF THERE ARE N WORDS(CLASSES) IN THE SOCIETY OF G NUMBER OF PEOPLES AND A SPREADSHEET IS HAVING G ROWS AND N+1 COLUMNS WHERE COLUMN 1 ROW>=2 TO ROW= G HAS THE PERSONS_UNIQUE_SOCIAL_IDENTITY_NUMBERS AND ROW=1 (COLUMN 2 TO COLUMN N+1) HAS THE CLASS NAMES (WHICH ARE COMPILED PROPERLY FOR JUST NON MANIPULATED SOCIETY OR NOT COMPILED DUE TO MANIPULATIONS , INJUSTICE , CRUMBLED HIERARCHY OF NEEDS , ETC...) AND WE PUT THE WEIGHTAGES OF AWARENES SCALES (0 TO 100 ) FOR EACH CELLS IN SUCH SPREADSHEET AND THE DISTRIBUTIONS OF SUCH VALUES GIVE US CLEAR PICTURES ABOUT HOW MUCH OF THE MANIPULATED CLASSES ARE GOVERNING THE WHOLE SOCIETY SIMILARLY FOR THE ATTENTIVENESS SCALES (0 TO 100) ARE FILLED FOR THE CELLS IN A SIMILAR OTHER SPREADSHEET AND SIMILARLY ANOTHER SIMILAR SPREADSHEET HAS THE SENSITIVITY VALUES (0 TO 100) SCALES ARE USED... IN THIS WAY WE CAN CONSTRUCT A GOOD EMPYRICAL FRAMEWORK FOR THE SANJOY NATH'S QHENOMENOLOGY REASONING SYSTEMS EMPYRICAL FRAMEWORKS SUCH THAT WE CAN USE THESE KIND OF STATISTICS TO UNDERSTAND THE EFFECTIVENESS OF JUSTICE SYSTEMS AND SOCIAL STRUCTURES...
/// </summary>
////// Axiom 1
//////Probability is backdated measure in sociology.Sanjoy Nath's qhenomenology reasoning system starts with assumption that all vocabulary words are just meaningless cpp classnames and the ordering of these vocabulary words dependent upon compilability ordering.this means while writing the dictionary you cannot use any word in right side (description side column 2) until all these words in description are well defined previously before that word is definitely defined before (in any previous row of same dictionary) . right side description is constructor of CPP class where left side column contains class names.This implies say any word at row r column 1 is described in row r column 2 and all word tokens (words used in column 2 are ensuried to present in any row<r column 1 of that same dictionary.untill column 1 row<r of dictionary contains a word w_i where I<r we cannot use w_i in right side column 2 in r th row. This strict condition is unique reasoning basis in Sanjoy Nath 's qhenomenology reasoning system.Ordering of basis objects and dependent objects are constructed following CPP compilability ordering.all vocabulary words are just unique Class names and are all uniquely QUEUED in column 1 of dictionary and exhaustive such queuedness describe the reasoning system of whole society. Regular use vocabulary, regular used queuedness of such concepts as CPP class descrbes the individual and society.This way CPP strictly ordered definition of classes compilability prooves meaningfulness. If the ordering alters, CPP project turns to non compilable.non compilability implies fallacy.noncompilability imples meaninglessness.strict QUEUED ness of vocabulary words (as concepts) are followed such that whole CPP project(dictionary or story or tender documents or legal documents are compilability check able)
//////Axiom 2
//////Sanjoy Nath's Qhenomenology reasoning system takes awareness_ratio, attentiveness_ratio and sensitivity_ratio as alternative measures, which are more powerful predictive metrics than probability.
//////Take all population data (agents in a society) indexed and stored in the rows of column 1 of a spreadsheet, and take all dictionary words (qhenomenologically ordered and queued in the n rows of column 1 of the dictionary database), transpose them and paste them into row 1 (n columns) of the analysis spreadsheet, following the ordering rules of axiom 1 (the column-1 rows of the axiom 1 dictionary are transposed to row 1, n columns, of the qhenomenology reasoning analysis spreadsheet).
//////Now we check how many individuals in the society are aware of which concepts (listed in row 1, n columns, of the qhenomenology reasoning analysis spreadsheet). The same style is used to design weightage-calculation metrics for awareness-, attentiveness- and sensitivity-like measurements over the society, and these distributions are used to predict the society's structure.
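A minimal sketch of the ratio computation (the name and the aggregation rule are my assumptions, not taken from the axioms): one plausible reading of awareness_ratio is the column-wise mean of the G x N score matrix, normalized by the 0..100 scale.

```csharp
// Hedged sketch: awareness_ratio per concept from a G x N spreadsheet of
// 0..100 awareness scores (person rows, concept columns). Dividing by 100
// is an illustrative normalization choice.
public static class RatioSketch
{
    public static double[] AwarenessRatios(double[,] scores)
    {
        int people = scores.GetLength(0), concepts = scores.GetLength(1);
        var ratios = new double[concepts];
        for (int c = 0; c < concepts; c++)
        {
            double sum = 0;
            for (int p = 0; p < people; p++) sum += scores[p, c];
            ratios[c] = sum / (people * 100.0); // 1.0 = full societal awareness of concept c
        }
        return ratios;
    }
}
```

The same shape of computation would apply to attentiveness_ratio and sensitivity_ratio over their own spreadsheets.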
//////Axiom 3
//////All assumptions and tautologies (UP TO BOOLEAN-SYSTEM TAUTOLOGIES FOR REASONING CONNECTIVES, WHICH ARE ALLOWED) are ignored, and strictly, definitely defined words and concepts are used following axiom 1. All documents, all stories, all essays, all poems... are ordered following axiom 1 first. (If a supplied database of Qhenomenologically ordered dictionary terms or a lookup table is not supplied, then all the definitions must be supplied in the text, and all the necessary tautologies must be supplied in the text, here in the content.)
//////}
//UNTIL BOOLEAN LOGIC, FREGE'S LOGIC, CANTOR'S LOGIC, RUSSELL'S LOGIC, TYPE THEORY AND SET THEORY WERE THERE, IT WAS NOT POSSIBLE TO FORMALIZE (THEORETICAL) COMPUTATION. THE BIT (NO/YES) SYSTEMS AND THE BINARY NUMBER SYSTEMS ARE THE BASIS FOR THE ELECTRONIC WAYS TO DEFINE THE CONCEPTS OF COMPUTATION. THEN THE PROCESSOR ARCHITECTURES WERE DEFINED, DESIGNED AND CONSTRUCTED. THEN KEYBOARD ASCII SYSTEMS WERE DESIGNED (THE FIRST DEFINED CONCRETIZATIONS OF ABSTRACT CONCEPTS TURNED INTO CLARITY FOR THE TEAM MEMBERS OF THE WHOLE PROCESS). THAT IS, SOCIAL AWARENESS OF SOME FUNDAMENTAL THINGS IS IMPORTANT TO PROCEED TO THE NEXT STAGES OF DEVELOPMENT, AND NEXT-STAGE CONCEPTS ARISE ONLY AFTER THE PREVIOUS BASIS CONCEPTS ARE CLEARED AND CONCRETIZED FOR SOCIETY TO THE LEVEL OF REGULAR USE; WHEN ALL MEMBERS OF THE TEAM (SOCIETY AS TEAM) HAVE CONCRETIZED THE IDEA TO PRACTICAL USABILITY, THEN NEXT-LEVEL CONCEPTS GET A PLATFORM TO ARISE, OTHERWISE NEXT-LEVEL CONCEPTS DON'T ARISE IN HUMAN MINDS... THIS IS THE FUNDAMENTAL CONCRETE QUEUEDNESS REASONING BASIS THAT SANJOY NATH CONSIDERS THE BASIS OF PRACTICAL REASONING, AND NEURAL NETWORKS ARE SECONDARY OR ALMOST IRRELEVANT IN THIS REASONING PROCESS... STRICT ORDERLINESS, STRICT COMPARABILITY, STRICT RECURSIVE STAGE-WISE CONCRETIZATION AND STRICT QUEUEDNESS OF CONCEPT CONCRETIZATION ARE THE FUNDAMENTAL BASIS OF SANJOY NATH'S QHENOMENOLOGY REASONING SYSTEM, WHERE TOPOLOGICAL CLASSIFICATION OF CONCEPTS IS ALSO NECESSARY; SO NUMBERING OF THE CONCEPTS AND QUEUEDNESS OF EVERY WORD (AS A C++ CLASS, ALL OF WHICH ARE CONCRETE CLASSES; NO ABSTRACT CLASS IS ALLOWED, SINCE CONCRETIZED CONCEPTS ARE USED FOR NEXT-LEVEL CONCEPTS, AND WHEREVER NON-CONCRETE CONCEPTS ARE INTRODUCED, IT IS TO MANIPULATE THE SYSTEM; SANJOY NATH TRIES TO DESIGN THE QHENOMENOLOGY REASONING SYSTEM SUCH THAT NO HUMAN CAN DO ANY KIND OF MANIPULATION WHILE DOING REASONING). THIS REASONING SYSTEM TAKES CARE OF EVERY HUMAN WORD AND THEN PROCEEDS TO THE NEXT WORD... UNTIL THE CONCEPT GOD IS CONCRETIZED, NO ONE CAN CONCRETIZE THE CONCEPT CHURCH... C FOR CHURCH AND G FOR GOD, YET GOD COMES FIRST AND CHURCH COMES AFTER; THE WHOLE DICTIONARY NEEDS TO REVISE ITS QUEUEDNESS AS PER THE QHENOMENOLOGY REASONING SYSTEM... ALL HUMAN REASONING IS MANIPULATED REASONING; HUMANS INVOLVE EMOTIONS AND DO BIASED REASONING... EVEN NEURAL NETWORKS USE HUMAN DATA, SO THEY ARE ALSO FLAWED... STRICTLY DON'T USE NEURAL NETWORKS TO GET BIAS-FREE REASONING SYSTEMS... STRICTLY FOLLOW THE COMPILER (LEXER, PARSER, THROUGH COMPILER-LIKE VERIFICATION OF ALL HUMAN SENTENCES)... SANJOY NATH'S QHENOMENOLOGY REASONING SYSTEM STRICTLY ENFORCES SPLITTING ALL KINDS OF HUMAN SENTENCES AS IF ALL THESE THINGS ARE CONCRETE C++ CLASSES, AND THE PRE-COMPILATION OF ONE CLASS IS STRICTLY REQUIRED BEFORE DEFINING THE NEXT-LEVEL CLASS... FOR EXAMPLE, UNTIL THE BIT CLASS IS DEFINED, THE COMPUTER ARCHITECTURE SYSTEMS CLASS IS NOT POSSIBLE... UNTIL PROCESSOR ARCHITECTURES ARE CONCRETIZED WITH THE CONCEPTS OF BIT STRINGING, WE CANNOT CONCRETIZE BIT STRINGS NOR BIT STREAMS... SO THE STRING-OF-BITS CLASS GETS CONCRETIZED... STRINGS OF BITS AND STREAMS OF BITS ARE MORE FUNDAMENTAL THAN THE BYTE CLASS... THEN THE CHUNK-OF-BITS CLASS IS CONCRETIZED, THEN COMPILED... ONLY THEN CAN WE THINK OF LEAST-SIGNIFICANT-BIT AND MOST-SIGNIFICANT-BIT CLASSES, AND ONLY THEN DOES THE NIBBLE CLASS GET COMPILED... ONLY THEN DOES THE BYTE CLASS GET COMPILED... ONLY THEN ARE INPUT/OUTPUT STREAM CLASSES ALLOWED TO COMPILE...
THEN AND ONLY THEN ARE THE BYTE-TO-CHAR MAPPING AND THE CHARACTER CLASS POSSIBLE TO CONCRETIZE, SO THE CHARACTER CLASS IS A SUBCLASS OF THE BIT CLASS... THE BYTE CLASS... IN THIS WAY THE NEXT-LEVEL DATATYPES ARE THE INTEGER CLASS, THEN THE FLOAT CLASS, THEN THE DOUBLE CLASS, ETC. ... SO THE DICTIONARY (VOCABULARY) IS ALSO GENERATED THROUGH CONCEPT CONCRETIZATIONS... STRICT CONCEPT CONCRETIZATIONS ARE DONE STRICTLY STAGE-WISE AND RECURSIVELY: ONE CLASS IS CONCRETIZED AND COMPILED, THEN THE NEXT-LEVEL CLASS IS DEFINABLE... IN THIS WAY ALL HUMAN VOCABULARY IS CONCRETIZED (C++ CLASSES WRITTEN ONE AFTER ANOTHER... ONE STAGE COMPILES FIRST, THEN THE NEXT STAGE COMPILES... NO REASONING IS ALLOWED UNTIL THE PREVIOUS-LEVEL CLASSES (VOCABULARY WORDS ARE JUST MEANINGLESS C++ CLASSES) COMPILE STAGE-WISE), AND THEN THE WHOLE DICTIONARY (THE HUMAN VOCABULARY SYSTEM, FOLLOWING THE STRICT COMPILABILITY CLOSURE PRINCIPLES OF SANJOY NATH'S QHENOMENOLOGY REASONING SYSTEM) GETS COMPILED STAGE-WISE.
//ACTUALLY, QHENOMENOLOGY IS DONE FOR STRICT QUEUEDNESS ANALYSIS, STRICT STACKEDNESS ANALYSIS AND STRICT DEPENDENCY-CHAIN ANALYSIS.
////// Axiom-wise talks in the Qhenomenology reasoning system
////// Proposition example: "Consciousness" is just an English word. It is just a CPP class name which, if it compiles, proves its existence. If a class doesn't compile, then that class doesn't exist yet. Now we will try to check: can we have compilability for the Consciousness class?
////// What other classes are necessary to define the Consciousness class? The Consciousness class constructor obviously uses instances of other classes (those other classes are more independent than the Consciousness class); until those more independent classes are completely COMPILED, we cannot create their instance variables inside the constructor of the Consciousness class. The same system of checking is necessary for every dictionary word in the Qhenomenology reasoning system.
////// Axiom: All human emotions are also just CPP class names. They don't have any meaning.
////// Axiom: The dictionary has no words; all words are just CPP class names. Some classes compile before other classes: more independent classes compile earlier, and more dependent classes become compilable later. This compilability ordering governs the dictionary order. Alphabetical ordering is not allowed.
////// Axiom: Whichever class is more independent compiles first, and dictionary orders are created such that independent class names come before dependent class names in the dictionary.
////// Axiom: Every CPP class in this system can have an overridable main method, and these are strictly non-static. None of the members of these classes are allowed to be static; all the members of every class are non-static.
//////Axiom
//////Human intervention cannot enforce compilability. Compilers follow strict grammars and don't bother about human intentions, but consistency from base class to current class governs the strength of bias-free, fallacy-free, ambiguity-free reasoning, so reasoning consistency is verified at each stage of class definition. Compilability itself is the proof of meaningfulness in Sanjoy Nath's Qhenomenology reasoning system.
//////We analyse any proposition or text using this style of reasoning when using Sanjoy Nath's Qhenomenology reasoning system.
// AXIOMS BEFORE AXIOM 1
//SANJOY NATH'S PHILOSOPHY OF QHENOMENOLOGY: QUEUEDNESS IN EVERY PHENOMENON, TRANSFORMABLE TO STACKEDNESS, AND STACKS TO QUEUES OR QUEUES TO STACKS, FIFO TO LIFO AND LIFO TO FIFO; RANDOMIZABLE TRANSPARENT STACKS; NON-REARRANGEABLE QUEUES TO REARRANGEABLE QUEUES; PARTITIONABLE NUMBERS (WE KNOW THAT ADDITIVE PARTITIONING OF ANY WHOLE NUMBER IS POSSIBLE, THAT WE CAN CHOOSE ANY PARTITION OF ONE WHOLE NUMBER AND RECOMBINE SOME PARTITION COMPONENTS TO GET OTHER WHOLE NUMBERS, AND THAT THERE ARE CATALAN STYLES OF PARTITIONING, RAMANUJAN STYLES OF PARTITIONING AND OTHER STYLES OF MULTIPLE COUNTING FOR COMBINATORIAL CONCLUSIONS). IN WAVES, SANJOY NATH DOESN'T BREAK THE WAVE INTO SINUSOIDAL COMPONENTS; INSTEAD SANJOY NATH REARRANGES THE TIMELINE PORTIONS TO FIND THE TIME SEGMENTS FOR WAVE ANALYSIS WITH CHOSEN SUB-QUEUE OBJECTS ON THE TIMELINE. THE PHILOSOPHY OF WAVE ANALYSIS IS DONE THROUGH FINDING THE RIGHT GROUPS OF ZERO CROSSING POINTS WHICH COMPLETE CYCLES, SUCH THAT CONTAINER AABB OBJECTS ARE CONSTRUCTED... THESE CONTAINER AABB OBJECTS CONTAIN SEVERAL SUB-QUEUES OF CREST AABB OBJECTS AND TROUGH AABB OBJECTS. NOW WE DESCRIBE THE SPECIALIZED TOPOLOGY TERMS AND SPECIALIZED GEOMETRY TERMS TO CLASSIFY THE CREST AABB OBJECTS AND TROUGH AABB OBJECTS, SUCH THAT WE CAN IMPLEMENT CLASSIFICATION NUMBERING SYSTEMS (AS IN THE BUILDING INFORMATION MODELING PHILOSOPHY, WHERE BUILDING BLOCKS ARE NUMBERED, AS IN TEKLA, REVIT, CAD ETC., SO THAT WE CAN PREPARE BILLS OF QUANTITIES OF SIMILARLY CLASSIFIED OBJECTS). IN SANJOY NATH'S QHENOMENOLOGY OF WAVE ANALYSIS, CREST AABB OBJECTS AND TROUGH AABB OBJECTS GET A CLASSIFICATION/CATEGORIZATION NUMBERING PROCESS SO THAT WE CAN IDENTIFY THE SPECIFIC NATURES OF CREST AABB AND TROUGH AABB OBJECTS (TOPOLOGICALLY AND GEOMETRICALLY). THESE ARE THE CORE BUILDING BLOCKS OF THE WAVE SIGNAL OBJECT, INSTEAD OF THE SUPERPOSITION OF COS AND SIN COMPONENTS. IGNORING COS COMPONENTS AND SIN COMPONENTS AS WAVE CONSTRUCTORS, SANJOY NATH REMODELS WAVE-LIKE SIGNALS AS COMBINATORIALLY CHOSEN SUB-QUEUE OBJECTS OR CHAINED QUEUE OBJECTS (QUEUES OF CREST AABB OBJECTS AND TROUGH AABB OBJECTS), OUT OF WHICH SOME SUB-QUEUES FORM COMPLETE WAVE CYCLES WITH TIME PERIODS AND WAVELENGTHS. THE CONTAINER AABB OBJECT CONTAINS THE COMPLETE CYCLE, AND THESE CONTAINER AABB OBJECTS ALSO HAVE A COMBINED CENTER OF GRAVITY (THE CG OF ALL TIP POINTS OF ALL CONTAINED SAMPLE AMPLITUDES IN THE WHOLE CONTAINER AABB OBJECT). THE NUMBERING METHODS (BIM-LIKE, BUILDING-INFORMATION-MODELING-LIKE NUMBERING) CLASSIFY THE CREST AABB OBJECTS (SUB-PART FABRICATION BUILDING BLOCKS), THE TROUGH AABB OBJECTS (SUB-PART FABRICATION BUILDING BLOCKS) AND THE CONTAINER AABB OBJECTS (ASSEMBLIES OF SEVERAL PARTS), EACH WITH DIFFERENT NUMBERING SCHEMES, CATEGORIZED TOPOLOGICALLY AND GEOMETRICALLY AND NUMBERED BY COMPLEXITY AND FABRICABILITY, AS WE DO IN BUILDING INFORMATION MODELING SYSTEMS WHERE NUMBERING PREPARES CLASSIFIED BILL-OF-MATERIALS TABLES AND COUNTS THE SAME-CATEGORY OBJECTS AS BUILDING BLOCKS; THE BILL OF QUANTITIES IS ALSO DIVIDED AS PER TRANSPORTATION SEQUENCE NUMBERING, CONSTRUCTION PHASING NUMBERS, ETC. ...
IN THE SAME WAY, SANJOY NATH CONSIDERS THE SAME CONTAINER AABB OBJECT SQUEEZABLE (SCALED DOWN HORIZONTALLY OR VERTICALLY; SCALING, WHETHER DOWN-SCALING OR UP-SCALING, DOESN'T CHANGE THE TOPOLOGY_NUMBER OF THE CONTAINER AABB OBJECT). THE TOPOLOGICAL PROPERTIES OF CONTAINER AABB OBJECTS (OR OF ANY CREST AABB OR TROUGH AABB OBJECT) ARE INVARIANT PROPERTIES WHICH DON'T ALTER EVEN WHEN WE SCALE THINGS DOWN OR UP... EXAMPLES OF SUCH TOPOLOGICAL PROPERTIES ARE: THE NUMBER OF LOCAL MINIMA PRESENT, THE NUMBER OF LOCAL MAXIMA PRESENT, THE NUMBER OF SAMPLES PRESENT, THE NUMBER OF NEGATIVE SAMPLES PRESENT IN THE CONTAINER AABB, THE NUMBER OF POSITIVE SAMPLES PRESENT IN THE CONTAINER AABB, THE NUMBER OF POSITIVE AMPLITUDES INVOLVED IN MONOTONICALLY INCREASING AMPLITUDE SETS IN A CREST AABB (IN A CONTAINER AABB), AND THE NUMBER OF POSITIVE AMPLITUDES INVOLVED IN MONOTONICALLY DECREASING AMPLITUDE SETS (IN THE CREST AABB OR IN THE CONTAINER AABB); SIMILARLY FOR TROUGH OBJECTS, THE NUMBER OF NEGATIVE AMPLITUDES INVOLVED IN MONOTONICALLY DECREASING (INCREASINGLY NEGATIVE) VALUES IN A TROUGH AABB OBJECT (OR IN A CONTAINER AABB OBJECT), AND SIMILARLY THE NUMBER OF MONOTONICALLY INCREASING (DECREASINGLY NEGATIVE) AMPLITUDES PRESENT IN THE TROUGH OBJECT (OR IN THE CONTAINER AABB OBJECT)... THEN CONSIDER THE NEIGHBOURHOOD TOPOLOGY PROPERTIES IN THE STRICT QUEUEDNESS OF CRESTS AND TROUGHS (NEIGHBOUR-TO-NEIGHBOUR VICINITY SAMPLE PROPERTIES ARE ALSO TOPOLOGICAL PROPERTIES, ALSO INVARIANT, USED TO CLASSIFY AABB OBJECTS OF EVERY KIND; THESE PROPERTIES ALSO DO NOT CHANGE IF WE SCALE THE AABB OBJECTS DOWN OR UP). FOR EXAMPLE, IF WE TEMPORARILY ARRANGE ALL THE SAMPLES PRESENT IN AN AABB OBJECT AND RANK THE ABSOLUTE AMPLITUDE LENGTHS IN ASCENDING OR DESCENDING ORDER, WE GET THE RANKS OF THE AMPLITUDES IN THAT PARTICULAR AABB OBJECT. IF WE CLASSIFY THE RANKING OF THESE AMPLITUDE VALUES FOR ALL AMPLITUDES IN AABB OBJECTS, THEN WE HAVE THE RANK VALUE OF THE LEFTMOST AMPLITUDE AND THE RANK NUMBER OF THE RIGHTMOST AMPLITUDE FOR ANY PARTICULAR AABB OBJECT... THESE RANKINGS ARE ALSO TOPOLOGY PROPERTIES WHICH DON'T CHANGE WHEN WE SCALE THE AABB OBJECT DOWN OR UP... THE RIGHTMOST RANK OF THE N-TH AABB OBJECT AND THE LEFTMOST RANK OF THE (N+1)-TH AABB OBJECT DECIDE THE INTERFACING NEIGHBOURHOOD PROPERTIES... FOR STRONGER INTERFACE CHECKING WE CAN TAKE THE RIGHTMOST 3 RANKS OF THE CURRENT AABB AGAINST THE LEFTMOST 3 RANKS OF THE NEXT AABB, WHICH HELPS US CLASSIFY THE NEIGHBOURINGNESS OF QUEUED STRUCTURES; THESE INTERFACING NEIGHBOURHOODS ARE ALSO CLASSIFIABLE, SO WE CAN DO PURE TOPOLOGICAL SCHEMATIC NUMBERING OF ZERO CROSSING POINTS, AND THESE ZERO CROSSING POINTS CAN HAVE JUNCTION-CLASSIFICATION NUMBERING WHICH IS ALSO INVARIANT (SINCE IT IS TOPOLOGICAL). IN THIS WAY WE CAN CLASSIFY THE NATURES OF ZERO CROSSING POINTS, AND EVEN IF WE SCALE ANY CONTAINER AABB DOWN OR UP AT ANY LOCATION, THIS DOESN'T ALTER THE NATURES OF THE ZERO CROSSING POINTS (PROVIDED NO DC OFFSETTING, I.E. VERTICAL SHIFTING OF THE ZERO-AMPLITUDE REFERENCE LINE USED TO FIND ZERO CROSSINGS, IS DONE IN THE MIDDLE OF THE PROCESS; EVERY TIME WE CHANGE THE REFERENCE ZERO-AMPLITUDE LINE, WE NEED TO RENUMBER EVERYTHING)...
SO BUILDING INFORMATION MODELING TECHNIQUES ARE USED HEAVILY FOR TOPOLOGICAL NUMBERING SYSTEMS AND GEOMETRIC NUMBERING SYSTEMS TO CLASSIFY EACH AND EVERY ZERO CROSSING POINT... ZERO CROSSING POINTS ARE CLASSIFIED FUNDAMENTALLY AS CREST-TO-TROUGH (CT) TYPE, TROUGH-TO-CREST (TC) TYPE, OR TROUGH-TO-TROUGH (TT) TYPE (WHEN ONE TROUGH ENDS AT ZERO AMPLITUDE AND ANOTHER TROUGH STARTS WITHOUT ENTERING ANY CREST); SIMILARLY, CREST-TO-CREST (CC) ZERO CROSSINGS CAN OCCUR WHERE NO INTERMEDIATE TROUGH OCCURS (ONE SAMPLE OF SILENCE BETWEEN TWO CONSECUTIVE CRESTS). IN THIS WAY WE CAN ALSO CLASSIFY REGIONS OF CONTIGUOUS SILENCE: S MEANS SILENCE CONTINUING, AND A SEQUENCE SSSSSSSSSSSSSS (THE CHARACTER COUNT OF THE S RUN) MEANS A LONG CHAIN OF SILENCES, ZERO AMPLITUDE, NO CREST AND NO TROUGH; TOPOLOGICALLY THIS IS A KIND OF REGION ON THE TIMELINE OF WAVES... WE CAN HAVE ST OR TS TYPES (SILENCE-TO-TROUGH OR TROUGH-TO-SILENCE ZERO CROSSING TOPOLOGY) AND SC OR CS TYPES (A SILENCE REGION ENDS AND A CREST STARTS, OR A CREST ENDS AND ENTERS AN SSSSSS REGION)... IN THIS WAY WE CLASSIFY THE ZERO CROSSING POINTS WITH NEIGHBOURHOOD AMPLITUDE RANKS (1 RANK FROM THE LEFT AND 1 RANK FROM THE RIGHT IS ENOUGH, BECAUSE MANY CASES HAVE ONLY 2 SAMPLES IN A CREST OR 2 SAMPLES IN A TROUGH, WHICH IS VERY COMMON AT 8000 SAMPLES PER SECOND, AS SANJOY NATH FOUND IN EXPERIMENTS ON 380000 WAV FILES). SO TOPOLOGY-DEPENDENT JUNCTION NUMBERING SCHEMES ARE VERY IMPORTANT FOR CLASSIFYING CREST AABBs, TROUGH AABBs AND ZERO-CROSSING NEIGHBOURING JUNCTIONS. FROM THESE WE CAN FIND THE REPEAT NATURES OF SIMILAR KINDS OF JUNCTIONS ON THE TIMELINE AND EASILY COUNT (USING REGULAR EXPRESSIONS ON JUNCTION TYPES ALONG THE TIMELINE, TOPOLOGICALLY) THE NUMBERS OF DIFFERENT KINDS OF CONTAINER AABB OBJECTS PRESENT IN THE WHOLE QUEUE OF AABB OBJECTS WHICH FORM THE QHENOMENOLOGICAL REASONING ON THE WAVE SIGNAL OBJECTS... SCALING AABB OBJECTS WILL NOT CHANGE THEIR TOPOLOGICAL NUMBERING CLASSIFIERS... SANJOY NATH'S PHILOSOPHY OF QHENOMENOLOGICAL REASONING CONVERTS THE TIMELINE OF WAVES INTO A REGULAR EXPRESSION PROBLEM (OR A GRAMMAR/PARSER SYSTEM, A COMPILER-LIKE VERIFIER SYSTEM) ON THE CLASSIFIED ZERO CROSSINGS AS STRINGS: CREST AABB OBJECTS AS SYMBOLS, TROUGH AABB OBJECTS AS SYMBOLS, CONTAINER AABB OBJECTS AS SYMBOLS, AND THE SEQUENCE (STRICT QUEUE) OF SYMBOLS FILTERABLE WITH REGULAR EXPRESSIONS AND PATTERN-MATCHING PROBLEMS APPLICABLE TO THE WAVE SIGNAL OBJECTS. THIS MEANS THE WHOLE DIGITAL SIGNAL PROCESSING SYSTEM TURNS INTO TOPOLOGICALLY NUMBERED SYMBOLS, AND A SEQUENCE OF SUCH SYMBOLS MEANS IT IS STRINGOLOGY NOW; STRINGS ARE PARSABLE IN SEVERAL STYLES TO HAVE GRAMMAR-LIKE, SYNTAX-LIKE PARSING SYSTEMS, COMPILABILITY CHECKING, AND CLOSURE PRINCIPLES USED TO HAVE ALGEBRAIC STRUCTURES ON THE WHOLE TIMELINE AS STRINGS OF SYMBOLS...
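Since the paragraph above reduces junction analysis to strings of symbols, the regular-expression step can be shown in a few lines of C#. The symbol alphabet (C, T, S) follows the text; the encoding of a wave into such a string is assumed to exist already.

```csharp
// Sketch: count junction patterns on a timeline already encoded as a string of
// symbols (C = crest, T = trough, S = one sample of silence). Only the
// pattern-counting step is shown here.
using System.Text.RegularExpressions;

public static class JunctionStringology
{
    public static int CountPattern(string timelineSymbols, string pattern)
    {
        // e.g. CountPattern(s, "CT") counts crest-to-trough junctions,
        //      CountPattern(s, "S{8,}") counts long contiguous silence regions
        return Regex.Matches(timelineSymbols, pattern).Count;
    }
}
```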
//SANJOY NATH HAS TESTED 380000 WAV FILES AT 8000 SAMPLES PER SECOND, 16 BIT (FLOAT SAMPLE BIT DEPTH IS PREFERRED OVER SHORT, SINCE THE SHORT DATATYPE DOES NOT KEEP SUFFICIENT DETAIL), AND FOUND THAT KEEPING ALL SAMPLES AT THE SAME AMPLITUDE (-1, 0 OR +1 ONLY, ON DB-SCALED AMPLITUDES) KEEPS THE SAME LEVEL OF UNDERSTANDABLE DETAIL IN THE MUSIC OR OTHER SOUNDS, EVEN THOUGH THE WAVE FORMS ARE NOT PRESERVED. SO THE WAVE-FORM DETAIL IS NOT VERY INFORMATIVE, AND THE TOPOLOGY OF THE CREST AABBs AND THE TOPOLOGY OF THE TROUGH AABBs ALONE ARE SUFFICIENT TO EXTRACT THE INFORMATION IN WAVE SIGNALS, WHICH ARE QUEUES OF PURE RECTANGLE-LIKE CRESTS AND PURE RECTANGLE-LIKE TROUGHS. THE COMPLICATED HARMONIC SUPERPOSITIONS OF SEVERAL SIN COMPONENTS ARE NOT NECESSARY, NOR ARE SEVERAL COS COMPONENTS NECESSARY, TO KEEP SUFFICIENTLY DISTINGUISHED SONG INFORMATION; EVEN SAMPLE VALUES OF -1, 0, +1 ARE SUFFICIENT TO GET THE PROPER WORDINGS, PROPER TUNES AND PROPER PERCUSSION POSITIONS... THE PATTERNS OF SILENCES AND THE PATTERNS OF BUNCHES OF INTERMITTENT QUEUED NATURES (QUEUING PATTERNS OF SAME-SIZED AMPLITUDES) ARE SUFFICIENT TO LISTEN TO THE SONGS: TONALITY, PERCUSSION, DISTINCTIVE FEATURES OF CNC VIBRATION DATA, BUILDING INFORMATION MODELING VIBRATION INFORMATION, AND STRUCTURAL-HEALTH-MONITORING VIBRATION INFORMATION EXTRACTION. BUNCHES OF EQUAL-SIZED VERTICAL NEGATIVE LINES OR EQUAL-SIZED VERTICAL POSITIVE AMPLITUDES ARE SUFFICIENT TO DISTINGUISH VOICES, DISTINGUISH SOUND INSTRUMENTS, AND DISTINGUISH TONALITY, GLIDING EFFECTS, PITCH-BEND EFFECTS, KEY-PRESSURE FEATURES, ETC. WHY? WHAT IS THE CAUSE BEHIND SUCH NON-DISTINGUISHABILITY? ANOTHER DOUBT: DO DIFFERENT PROPORTIONS OF ALL-EQUAL-SIZED NEGATIVE AMPLITUDES AND DIFFERENT PROPORTIONS OF ALL-EQUAL POSITIVE AMPLITUDES CAUSE THE SAME LEVEL OF INDISTINGUISHABILITY? WILL A DC SHIFT ON SUCH ALL-EQUAL-AMPLITUDE CASES (A CONSTANT VERTICAL SHIFT OF THE ZERO-AMPLITUDE BASE LINE) DEGRADE SIGNAL QUALITY DRASTICALLY? WHY? WHAT DOES CONVENTIONAL WAVE SIGNAL PROCESSING SAY ABOUT THIS? STILL, SANJOY NATH HAS DECIDED TO WORK WITH WAVE-FORM SEGMENTING. WAVE-FORM SEGMENTING IN SANJOY NATH'S QHENOMENOLOGY PHYSICS OF WAVES DEALS WITH RECTANGULAR AABBs OF CRESTS AND RECTANGULAR AABBs OF TROUGHS IN A STRICT QUEUE OF ZIGZAG-PLACED AABB OBJECTS... AFTER EXPERIMENTING WITH THESE KINDS OF HARMONIC MIXED WAVES, SANJOY NATH HAS SEEN THAT WE CAN IMAGINE A BIGGER CONTAINER AABB WHICH ENCLOSES A BUNCH OF CREST AABBs AND A BUNCH OF TROUGH AABBs IN A SINGLE CONTAINER AABB, WHERE THIS CONTAINER AABB ENCLOSES A WHOLE CYCLE OF THE WAVE AND THE LENGTH OF THIS CONTAINER AABB IS INTERPRETED AS ONE SINGLE TIME PERIOD (ONE WAVELENGTH SEGMENT WHICH CONTAINS A COMPLETE CYCLE OF THE WAVE FORM). WE NEED A FITTING OF THE BASE LINE (FOR ASYMMETRIC OR SYMMETRIC WAVE FORMS ALIKE); WE CAN DO PRE-CALCULATED DC OFFSETS OF THE BASE LINE SO THAT WE CAN DISTINGUISH CYCLE COMPLETIONS AT CRISP ZERO CROSSING POINTS. AFTER CALIBRATING THE ZERO-AMPLITUDE BASE LINE, WE PRE-CALCULATE AND CALIBRATE THE BASE LINES SUCH THAT THE ZERO CROSSING POINTS CLEARLY IDENTIFY WHERE A CONTAINER AABB BOUNDING BOX SHOULD START AND WHERE IT NEEDS TO COMPLETE.
EVERY SUCH CONTAINER BOUNDING BOX WILL HAVE A CG (CENTER OF GRAVITY CALCULATED FROM ALL SAMPLE AMPLITUDE TIP POINTS PRESENT IN THE CONTAINER BOUNDING BOX), AND EACH CONTAINER BOUNDING BOX WILL CONTAIN A SUB-QUEUE OF SOME CRESTS AND SOME TROUGHS, WHERE SOME OF THESE CRESTS AND TROUGHS ARE REDUNDANT, SINCE THEY CARRY EXTRA INFORMATION THAT IS NOT NECESSARY TO DISTINGUISH THE FEATURES OF A SONG... ALL THE WORDS ARE LISTENABLE, ALL THE TONALITY IS LISTENABLE AND IDENTIFIABLE, AND ALL PERCUSSION BEATS ARE LISTENABLE AND DISTINGUISHABLE... THIS MEANS WE NEED THE LIMITING CASES: WHERE THE MINIMUM NECESSARY INFORMATION STARTS, WHERE THE SUFFICIENT INFORMATION STAGE COMPLETES, AND WHERE THE EXCESS INFORMATION IN THE WAVE CONTENT STARTS. SANJOY NATH'S AABB MODEL OF THE QHENOMENOLOGY QUEUE STRUCTURE OF WAVES FOCUSES ON THESE LIMITING CASES: THE START OF NECESSITY, THE COMPLETE UPPER LIMIT OF SUFFICIENCY, AND THE MINIMUM POINT OF LISTENABLE, JUST-NOTICEABLE DISTINCTIONS OF INFORMATION WHERE EXCESS INFORMATION STARTS... SANJOY NATH HAS ALSO EXPERIMENTED AND FOUND THAT SOME OF THE CREST AABBs (SUB-PARTS OF A WHOLE CYCLE) AND SOME OF THE TROUGH AABBs IN THE BOUNDING BOX ARE REDUNDANT, EXCESS-INFORMATION CARRIERS: EVEN IF WE SILENCE OUT THESE REDUNDANT CRESTS AND THESE REDUNDANT TROUGHS, THAT DOESN'T HAMPER THE LISTENABLE, DISTINGUISHABLE INFORMATION CONTENT OF THESE WAVES. WHY DO SUCH CASES OCCUR? WHICH THEORIES EXPLAIN THESE?
// SANJOY NATH PROPOSES A TOOTHPICK MODEL FOR THE COMBINATORIAL QUEUE STRUCTURE OF A WAVE, WHICH RESEMBLES A QUEUE OF CREST AABBs AND TROUGH AABBs PLACED ALONG THE BASE LINE IN ZIGZAG WAYS. TAKE A BOX OF TOOTHPICKS, ALL OF THE SAME LENGTH, BUT BREAK THEM (USE PARTITIONS, LIKE THE CATALAN AND RAMANUJAN STYLES OF PARTITIONING). TAKE SOME OF THESE PIECES AS BLUE-COLOURED PIECES, WHICH RESEMBLE THE CREST SUB-PART AABBs, AND SOME OF THESE PIECES AS RED-COLOURED PIECES, WHICH ARE THE TROUGH AABB OBJECTS. NOT ALL THE PIECES OF THE PARTITIONS ARE NECESSARY TO CARRY SUFFICIENT INFORMATION FOR THE NECESSARY PURPOSE: PURPOSE NECESSITY IS A LIMIT-GOVERNING FACTOR, AN EXCESS-GOVERNING FACTOR AND A SURPLUS-GOVERNING FACTOR... THE COMBINATORIAL NATURE OF SUCH CREST AABB AND TROUGH AABB OBJECTS IS IMPORTANT QUEUE STRUCTURING, WHERE THE SUB-QUEUE OF SOME CREST AABBs AND TROUGH AABBs WITHIN THE CONTAINER AABB ACTUALLY CARRIES THE NON-REDUNDANT, NECESSARY AND SUFFICIENT INFORMATION.
//WHEN THE SAMPLES PER SECOND ARE KNOWN FOR ANY WAVE (MONO-CHANNEL 16-BIT FLOATING WAV FILES), THE BIT DEPTH FOR AMPLITUDES IS THERE, AND IN A FIRST SCAN (IN THE 380000-WAV-FILE STUDY) SANJOY NATH FOUND THAT IF MEAN + STANDARD DEVIATION IS TAKEN TO FILTER THE ABSOLUTE AMPLITUDES, ENFORCING ZERO AMPLITUDE FOR EVERY SAMPLE WHOSE ABSOLUTE VALUE IS < (MEAN + 1 * STANDARD DEVIATION), AND THE WAV FILE IS REGENERATED WITH THE SAME SAMPLE COUNT, THEN THE WHOLE SONG REMAINS LISTENABLE AND QUITE CLEARLY UNDERSTANDABLE... SOME NOISE OCCURS DUE TO THE ENFORCED ZERO AMPLITUDES FROM THE FILTERING, BUT THE LISTENABILITY OF ALL WORDS, INSTRUMENTS AND TUNES IS NOT HAMPERED TOO MUCH. THEN, WHEN WE TRY TO FILTER OUT THE NOTES, WE CAN FILTER NOTES OUT TO MIDI FILES... SO WE CAN DO STRICT NUMBERING OF THE ZERO CROSSING POINTS (AFTER THE FIRST SCAN COUNTS THE INDEXES OF THE ZERO CROSSING POINTS), AND THEN, THROUGH ANALYSIS OF NEIGHBOURHOODS (A FEW SAMPLES TO THE LEFT OF A ZERO CROSSING POINT AND A FEW SAMPLES TO ITS RIGHT), SIMILAR TOPOLOGICAL PROPERTIES EMERGE WHICH DON'T CHANGE UNDER SCALING OF THE CONTAINER AABB OBJECTS... USING THIS PHILOSOPHY, SANJOY NATH'S QHENOMENOLOGY REASONING ON THE QUEUEDNESS OF WAVE COMPONENTS (ALREADY TOPOLOGICALLY NUMBERED, RENUMBERED AND REFINED OVER N SCANS IF NECESSARY... CURRENTLY THE THEORY IS UNDER CONSTRUCTION; WE ARE TRYING TO CROSS-VERIFY THE OUTPUTS WITH THE CONVENTIONAL THEORY OF WAVES AND CONVENTIONAL FOURIER-SPECTRUM FREQUENCY-DOMAIN DATA, TO CHECK WHETHER WE GET THE SAME KIND OF OUTPUTS AS FOURIER, OR BETTER) AIMS TO ACHIEVE PITCH-BEND MANAGEMENT: CONSTRUCTING PITCH BENDS BY MERGING MONOTONICALLY INCREASING NOTES INTO A SINGLE START NOTE AND CLUBBING THESE NOTES WITH PITCH BENDS GLIDING UP TO 2 SEMITONES, THEN STARTING A NEW NOTE IF THE FREQUENCY RANGE CHANGES BEYOND 2 SEMITONES, AS PER DEFAULT MIDI STANDARDS... SIMILARLY MERGING THE MONOTONICALLY DECREASING NOTES... USING 30-SAMPLE TO 300-SAMPLE WINDOWING, WHICHEVER FITS BEST FOR THE GIVEN SAMPLES PER SECOND (FOR 8000 SPS, 8 SAMPLES PER MILLISECOND, FOR EXAMPLE). SANJOY NATH THINKS AT LEAST K * SAMPLES-PER-MILLISECOND ARE NECESSARY (THE VALUE OF K NEEDS TO BE CALCULATED FROM THE FIRST SCAN, GETTING THE CHARACTERISTICS OF THE WAVES THROUGH THE TOPOLOGY NUMBERING DONE AT ALL ZERO CROSSING CONDITIONS AND NEIGHBOURHOODS; THE SCALE-INVARIANT TOPOLOGY PROPERTIES OF NEIGHBOURHOOD SAMPLE REGIONS ARE IMPORTANT TO CLASSIFY THE ZERO CROSSING POINTS, AND THROUGH THAT SYSTEM WE CAN IDENTIFY THE BEST WINDOW SIZES TO IDENTIFY FREQUENCIES). SANJOY NATH'S PHILOSOPHY OF WAVE ANALYSIS HANDLES ZERO CROSSING POINTS AS CONNECTORS BETWEEN TWO DIFFERENT COMPLETE CYCLES (THE LEFT-SIDE CONTAINER AABB MEANS ONE CYCLE COMPLETES; THE RIGHT-SIDE CONTAINER AABB MEANS ANOTHER CYCLE STARTS), AND THE NUMBER OF COMPLETE CYCLES PER SECOND IMPLIES FREQUENCY, WHICH IS INTERPRETED AS THE NUMBER OF COMPLETE CONTAINER AABB OBJECTS PRESENT IN ONE SECOND'S WORTH OF SAMPLES IN A MONO WAV FILE.
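The mean-plus-one-standard-deviation gating described above is easy to state in code. This is a hedged sketch of my reading of the text (absolute amplitudes below mean + 1 sigma of the absolute amplitudes are forced to zero, sample count preserved), not the author's implementation:

```csharp
using System;
using System.Linq;

public static class AmplitudeGate
{
    // Silence every sample whose |amplitude| < (mean + 1 * stddev) of the
    // absolute amplitudes; the returned array has the same sample count.
    public static float[] SilenceBelowMeanPlusSigma(float[] samples)
    {
        double mean = samples.Average(s => Math.Abs(s));
        double sigma = Math.Sqrt(samples.Average(s => Math.Pow(Math.Abs(s) - mean, 2)));
        double threshold = mean + 1.0 * sigma;
        return samples.Select(s => Math.Abs(s) < threshold ? 0f : s).ToArray();
    }
}
```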
// AS IN BUILDING INFORMATION MODELING SYSTEMS LIKE TEKLA, ADVANCE STEEL AND REVIT, NUMBERING IS IMPORTANT AND EVERYTHING HAS SOME CONCRETELY WELL-DEFINED CLASSIFICATION (TOPOLOGICALLY CLASSIFIED OR GEOMETRICALLY CLASSIFIED): EVERY OBJECT HAS A CLASSIFIED NUMBER (GEOMETRICALLY SIMILAR OR TOPOLOGICALLY SIMILAR THINGS GET THE SAME NUMBER), SO EVERY CREST HAS A NUMBER, AND BILLS OF QUANTITIES ARE CONSTRUCTED BY ASSIGNING THE SAME KIND OF NUMBER TO THE SAME KIND OF TOPOLOGY... ALL CREST AABBs ARE CLASSIFIED THROUGH BIM-LIKE NUMBERING SCHEMES... ALL TROUGH AABBs ARE NUMBERED STRICTLY FOLLOWING TOPOLOGICAL-SIMILARITY AND GEOMETRICAL-SIMILARITY RULES. AND STRICTLY NOTE: THE ZERO CROSSINGS IN THE WAVES ARE ALSO NUMBERED (AS IN BIM PROJECTS), WHERE ZERO CROSSING POINTS ARE CONSIDERED THE CONNECTIONS BETWEEN THE LEFT-SIDE CONTAINER AABB OBJECT (OR PART AABB OBJECT, WHICH IS A STRUCTURAL MEMBER) AND THE RIGHT-SIDE AABB OBJECT... AABB OBJECTS ARE PARTS OR SUB-PARTS, ALL WITH TOPOLOGY PROPERTIES (THE WHOLE WAVE CAN HAVE SAME-NUMBERED AABB OBJECTS PRESENT MULTIPLE TIMES AT SEVERAL DIFFERENT SCALES... SCALING DOESN'T CHANGE THE TOPOLOGY... EVERY AABB OBJECT HAS TOPOLOGY PROPERTIES WHICH REMAIN UNALTERED UNDER SCALING, ROTATING AND TRANSLATING; BUT MIRRORING IS NOT ALLOWED: IF MIRRORED, THE TOPOLOGY PROPERTIES OF THE AABB CHANGE, SO THE NUMBERING CHANGES, AS PER SANJOY NATH'S QHENOMENOLOGY WAVE THEORY REASONING SYSTEM). SO FIRST ALL ZERO CROSSING POINTS ARE IDENTIFIED, WITH NO NUMBERING DONE YET... THEN ALL CREST AABB OBJECTS ARE CONCRETELY IDENTIFIED AND THEIR TOPOLOGY NUMBERING IS DONE ON THE BASIS OF THE INTERNAL INVARIANT GEOMETRIES PRESENT IN THE CREST AABB OBJECTS AND IN THE TROUGH AABB OBJECTS... THE CLUE IS THAT THE NUMBER OF SAMPLES PRESENT IS NOT AN IMPORTANT TOPOLOGY PROPERTY, BUT THE NUMBER OF LOCAL MAXIMA AND THE NUMBER OF LOCAL MINIMA PRESENT ARE CONCRETE INVARIANT TOPOLOGICAL PROPERTIES... THE PROPORTION (AREA UNDER ALL AMPLITUDES, TAKING THE INTER-SAMPLE DISTANCES MEASURED IN MICROSECONDS AND THE AMPLITUDES MEASURED IN AMPLITUDE UNITS) / (TOTAL AREA FORMED BY THE AABB WIDTH IN MICROSECONDS AND THE AABB HEIGHT TAKEN AS THE MAXIMUM AMPLITUDE FOUND IN THE AABB OBJECT) IS A TOPOLOGICAL INVARIANT... THE NUMBER OF MONOTONICALLY INCREASING AMPLITUDES PER TOTAL SAMPLES IN THE AABB IS A TOPOLOGICAL INVARIANT... THE NUMBER OF MONOTONICALLY DECREASING AMPLITUDES PER TOTAL SAMPLES IN THE AABB OBJECT IS ANOTHER TOPOLOGICAL INVARIANT... FIRST WE DO TOPOLOGICAL NUMBERING (AS WE DO IN THE BUILDING INFORMATION MODELING PROCESS TO CLASSIFY THE BUILDING PARTS, SUB-PARTS AND ASSEMBLIES): WE DO BIM-LIKE REASONING ON THE PARTS (CREST AABBs, TROUGH AABBs, SILENCE AABBs, AND ZERO CROSSING POINTS AS CONNECTOR PARTS), AND AFTER ALL THE CREST AABBs GET TOPOLOGICAL NUMBERING AND ALL THE TROUGH AABBs GET TOPOLOGICAL NUMBERING, WE SEARCH FOR REPEATS OF TOPOLOGICALLY SAME KINDS OF AABB OBJECTS PRESENT IN THE WHOLE WAVE (THE WHOLE WAVE IS CONSIDERED THE BUILDING; CREST AABBs ARE PARTS, TROUGH AABBs ARE PARTS... ZERO CROSSING POINTS ARE SPECIAL KINDS OF CONNECTORS BETWEEN PARTS... CONTAINER AABB OBJECTS HOLD SUB-PARTS: THE CREST AABB AS SUB-PART, THE TROUGH AABB AS SUB-PART, AND INTERMEDIATE ZERO CROSSING POINTS AS SUB-CONNECTORS...). SCALING DOESN'T CHANGE THE TOPOLOGICAL NUMBERING...
SCALING CHANGES THE GEOMETRIC NUMBERING, BUT THE TOPOLOGICAL NUMBERING DOESN'T CHANGE... TOPOLOGICAL NUMBERING SYSTEMS CLASSIFY TIMBRE, TONALITY, ETC.; GEOMETRIC SCALING CHANGES FREQUENCY, BUT THE TIMBRE REMAINS THE SAME... INSTRUMENTS OR HUMAN VOICES HAVE THE SAME TOPOLOGY NUMBER FOR A SINGLE VOICE, BUT THE GEOMETRY NUMBERING CHANGES WHEN THE GEOMETRIC SCALE CHANGES... SO THE SAME INSTRUMENT CAN HAVE DIFFERENT FREQUENCIES, BECAUSE ALL SAME-TOPOLOGY-NUMBERED THINGS IMPLY THE SAME INSTRUMENT OR THE SAME HUMAN-VOICE TIMBRE QUALITY, AND THE GEOMETRIC NUMBERING IS WHAT CHANGES WITH FREQUENCY... IN THIS WAY SANJOY NATH'S QHENOMENOLOGY REASONING SYSTEM ON WAVE THEORY IS DIFFERENTLY AXIOMATIZED: IT COMPLETELY IGNORES HARMONIC ANALYSIS AND COMPLETELY IGNORES FOURIER STYLES FOR UNDERSTANDING THE THEORY OF WAVES... IT COMPLETELY AVOIDS THE CONVENTIONAL THEORY OF WAVES AND LOOKS AT IT AS A BUILDING-INFORMATION-MODELING, GEOMETRY-RELATED OR TOPOLOGY-RELATED PROBLEM.
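One of the claimed scale-invariant classifiers, the count of local maxima inside an AABB, can be made concrete. A small illustrative sketch (my own, not from the source):

```csharp
public static class TopologyInvariants
{
    // Count local maxima of the sample amplitudes inside one AABB.
    // Multiplying every amplitude (or every time coordinate) by a positive
    // scale factor leaves this count unchanged, which is the invariance
    // claimed in the text above.
    public static int CountLocalMaxima(double[] amplitudes)
    {
        int count = 0;
        for (int i = 1; i < amplitudes.Length - 1; i++)
            if (amplitudes[i] > amplitudes[i - 1] && amplitudes[i] > amplitudes[i + 1])
                count++;
        return count;
    }
}
```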
//SANJOY NATH'S PROOF OF HIS CLAIMS IN SANJOY NATH'S QHENOMENOLOGY (DON'T CONFUSE IT WITH PHENOMENOLOGY; QHENOMENOLOGY IS AN ENTIRELY DIFFERENT THING) PHYSICS REASONING SYSTEMS ON WAVES AND DIGITAL SIGNALS
//Fourier tried to explain the different wave forms as vertical summation of amplitudes (superposition of multiple sinusoidal shapes), and due to those superpositions the cyclic natures of waves change. When superpositions are done, the shape of each wave cycle changes, the time period (in microseconds) per cycle changes, and similarly the wave form's crest count and trough count change; ultimately one wave cycle has several crests and troughs involved in forming a single cycle... In the conventional theory of waves, frequency is described as the number of complete cycles per second (1000000 microseconds as the width of a second along the timeline). Fourier looked at the complete cycle (zero crossing points as an effect of superposition), but Sanjoy Nath looks at frequency as a combinatorial packing factor of different AABB widths along the timeline. In Sanjoy Nath's interpretation, vertical superposition is not taken as the cause of zero crossings; instead, zero crossings are a combinatorial counting property, and CATALAN-NUMBER and integer-partitioning-like reasoning over the timeline is used. This means whole wave cycles are partitioned into CREST AABB widths (in microseconds) and TROUGH AABB widths (in microseconds); ultimately a whole wave cycle is the summation of well-partitioned, differently sized AABB objects, and the total energy in a wave form depends upon the CG of all amplitudes in all the crest and trough AABB objects, which governs the wave's features. Energy is scalar and scalarly addable, so pure arithmetic is applicable, and the total cycle width in microseconds is the time period of the wave, which is the same in Sanjoy Nath's Qhenomenology linear queue model of crests and troughs; combinatorial juxtapositions of crest AABBs and trough AABBs can also achieve the same time period, though the cycle will not look like a complete wave cycle, yet stacking all these AABB objects with left margins aligned will not hamper the CG positioning of the cycle. Different crest AABB widths plus different trough AABB widths are summed together to form a single wave cycle, and that is the time period of the wave (whereas in the conventional theory of waves, superposition of different sinusoidal components governs the zero crossing points)... Sanjoy Nath looks at this scenario from the other point of view: he takes the zero crossing points as governing factors, and the combinatorial clustering of crest AABBs and trough AABBs, arranged in a specific strict ORDERED QUEUE of particular crests after particular troughs, makes a wave cycle whose one time period is found; but topologically that doesn't help us think about different kinds of QUEUING, nor give us the bigger picture of the combinatorial packing problem of differently sized AABBs achieving the same cycle (a complete cycle of the same time period). On the other hand, the conventional theory of waves considers 1 second (1000000 microseconds as reference) and the number of complete time periods per second as frequency. The conventional theory imagines a certain cycle shape rolling on a horizontal surface, covering a certain distance per completed cycle, but while plotting the waves and showing the wavelengths, the conventional theory shows wavelengths along the time axis. Sanjoy Nath considers total wavelength as total time covered per cycle, so time period and wavelength look geometrically the same in Sanjoy Nath's Qhenomenology theory of waves.
So consider the number of complete widths of complete cycles: after the queuing of crest AABBs and trough AABBs, the full cycle completes and the total time covered is T microseconds, which is a PACKET of some AABB objects. When T squeezes, the packing count increases, which is actually the frequency increasing... Frequency is nothing but the packing factor of the complete AABB of a complete cycle within a 1000000-microsecond length. When frequency is a packing factor, it is a scale factor on widths; when a scale factor s is involved, it scales the x coordinates of all CG points. So when a single cycle's AABB gets squeezed, the frequency increases, so the x coordinate of the CG of the whole-cycle AABB also squeezes, and proportionately the x coordinates of all component crest AABBs and trough AABBs squeeze too... In this way, the packing and partitioning of the AABB queue along the timeline take different packings to form multi-frequency waves. This reconciles horizontal AABB packing with the conventional superposition of waves (which is done vertically). Now consider the vertical side, that is, the Y values of the CG for every AABB component... These vary with frequency change: when the energy per crest AABB and per trough AABB remains the same, horizontal squeezing of an AABB increases the Y value of its CG (think of a virtual bulk modulus for these AABBs). So stacking one AABB above another, keeping left margins aligned, generates different y for differently squeezed x, and vertical spectral lines are seen when we view the stack of AABBs from the top. This justifies the conventional theory within Sanjoy Nath's Qhenomenological theory of waves.
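The CG-and-squeezing argument can be illustrated directly. In this sketch (my reading: CG as the arithmetic mean of the amplitude tip points, which is one plausible interpretation of the text), a horizontal squeeze by factor s multiplies the CG x-coordinate by the same s:

```csharp
public static class CgSketch
{
    // CG of one AABB from its amplitude tip points (t[i], amp[i]).
    // If every t[i] is replaced by s * t[i], the returned X becomes s * X,
    // matching the proportional-squeeze claim above.
    public static (double X, double Y) CenterOfGravity(double[] t, double[] amp)
    {
        double sx = 0, sy = 0;
        for (int i = 0; i < t.Length; i++) { sx += t[i]; sy += amp[i]; }
        return (sx / t.Length, sy / t.Length);
    }
}
```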
// AXIOM 1: SANJOY NATH'S QHENOMENOLOGY (DON'T CONFUSE IT WITH PHENOMENOLOGY; QHENOMENOLOGY IS AN ENTIRELY DIFFERENT THING) PHYSICS REASONING SYSTEM ON WAVES AND DIGITAL SIGNALS DOES NOT CONSIDER WAVES AS COMBINATIONS OF COS COMPONENTS AND SIN COMPONENTS AT ALL. SO SANJOY NATH'S QHENOMENOLOGY REASONING ON DIGITAL SIGNAL PROCESSING WILL NEVER USE THE FOURIER PROCESS NOR FFT-LIKE THINGS TO DO WAVE ANALYSIS OR DIGITAL SIGNAL PROCESSING.
// AXIOM 2: SANJOY NATH'S QHENOMENOLOGY PHYSICS REASONING SYSTEM ON WAVES AND DIGITAL SIGNALS CONSIDERS A HORIZONTAL 0-0 LINE (A ZERO-AMPLITUDE LINE, WHICH IS THE AVERAGE OF ALL THE AMPLITUDES IN THE GLOBAL DATA OF FLUCTUATING AMPLITUDE-LIKE VALUES; ZERO CROSSINGS ARE CALCULATED WITH REFERENCE TO THIS 0-0 AVERAGE-VALUE LINE). AMPLITUDES BELOW THIS AVERAGE ARE NEGATIVE AMPLITUDES AND AMPLITUDES ABOVE THIS AVERAGE VALUE ARE POSITIVE AMPLITUDES.
// AXIOM 3: SANJOY NATH'S QHENOMENOLOGY PHYSICS REASONING SYSTEM ON WAVES AND DIGITAL SIGNALS CONSIDERS WAVES AS SERIES (STRICT QUEUES) OF CREST AABB OBJECTS AND TROUGH AABB OBJECTS. ALL THESE CREST AND TROUGH AABB OBJECTS ARE LIKE TRANSPARENT TRACING-PAPER RECTANGULAR BOUNDING BOXES, EACH WITH A CENTER OF GRAVITY CALCULATED FROM THE AMPLITUDE TIP POINTS BOUNDED INSIDE THE BOX. FOR CREST OBJECTS, THE ORIGIN OF THE AABB RECTANGULAR BOUNDING BOX IS AT THE LEFT-BOTTOM CORNER; FOR TROUGH-LIKE OBJECTS, THE ORIGIN IS AT THE LEFT-TOP CORNER. THESE ORIGINS ARE PLACED ON THE 0-0 (AVERAGE AMPLITUDE) LINE SUCH THAT A QUEUE-LIKE SEQUENCE CREST, TROUGH, CREST, TROUGH IS PLACED ONE AFTER ANOTHER, AND EVERY CREST OBJECT HAS A STRICT SEQUENCE NUMBER AND EVERY TROUGH HAS A STRICT SEQUENCE NUMBER, SO EVERY CREST AND TROUGH IS UNIQUELY PLACED IN THE STRICT QUEUE TO GENERATE THE WHOLE WAVE OBJECT (WHOLE SIGNAL OBJECT).
// AXIOM 3+: IN SANJOY NATH'S QHENOMENOLOGY PHYSICS REASONING SYSTEM ON WAVES AND DIGITAL SIGNALS, THE ANALYSIS STARTS WITH THE CONDITION: FORGET THE ACTUAL AMPLITUDE VALUES AND REMEMBER ONLY THE MAX WIDTH OF EACH AABB (IN MICROSECONDS OR A SIMILAR METRIC), THE MAX HEIGHT OF EACH AABB (IN AMPLITUDE-LIKE MEASURES), THE CG, AND THE STANDARD DEVIATION, SKEWNESS AND KURTOSIS OF THE AMPLITUDES (THE STATISTICAL MOMENTS CALCULATED ON THE AMPLITUDES IN THE CREST AABB OBJECT OR IN THE TROUGH AABB OBJECT). THE ACTUAL AMPLITUDE VALUES ARE FORGOTTEN ENTIRELY WHILE DOING SIGNAL-PROPERTY ANALYSIS.
// AXIOM 3++: IN SANJOY NATH'S QHENOMENOLOGY PHYSICS REASONING SYSTEM ON WAVES AND DIGITAL SIGNALS, THE ANALYSIS IS DONE ON STACKS: THE QUEUE OF CREST AABBs AND TROUGH AABBs IS DISMANTLED AND TRANSFORMED INTO (0,0)-ALIGNED (LEFT-MARGIN-ALIGNED) AABB RECTANGULAR BOUNDING BOXES, SO THAT AFTER THE QUEUE IS DISMANTLED AND THE STACKING IS DONE, THE STACK OF TRANSPARENT CREST BOUNDING BOXES AND TROUGH BOUNDING BOXES HAS ALL LEFT MARGINS ALIGNED AS THE OVERALL LEFT MARGIN. SANJOY NATH HAS TESTED THIS ON 380000 DIGITAL SOUND WAV FILES AND FOUND THAT THE CG POINTS (BLUE DOTS FOR CREST AABB AMPLITUDES, RED DOTS FOR THE CGs OF TROUGH AABB AMPLITUDES) LIE ON SPECTRUM-LIKE VERTICAL STRIPS WHEN ALL THESE TRANSPARENT AABB BOUNDING BOXES (LEFT-MARGIN-ALIGNED, THE ORIGINS OF ALL THE AABB RECTANGULAR TRACING PAPERS PLACED ON ONE ANOTHER SO THAT ALL ORIGINS COINCIDE IN THE STACK) ARE VIEWED: IF THERE ARE N DIFFERENT FREQUENCIES PRESENT IN THE WAVE, THEN N SHARP VERTICAL LINES APPEAR WHEN WE LOOK AT THE STACK OF TRANSPARENT ALIGNED AABB OBJECTS. THIS SIGNIFIES THAT FREQUENCY ANALYSIS IS EASIER TO HANDLE: NO FFT-LIKE DATA HANDLING IS NECESSARY AT ALL, AND NEITHER COS COMPONENTS NOR SIN COMPONENTS ARE NECESSARY TO DO SPECTRAL ANALYSIS ON THE WAVE-LIKE OBJECTS.
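The "N frequencies give N sharp vertical lines" observation suggests a simple computational proxy: after left-aligning all AABBs at x = 0 and computing each box's CG, bucket the CG x-values and count the occupied buckets. This is my own proxy for the visual inspection the axiom describes, with an assumed bucket width:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class StackSpectrumSketch
{
    // Estimate the number of distinct "vertical lines" (claimed to match the
    // number of frequencies present) by bucketing CG x-values of the stacked,
    // left-aligned AABB objects. Bucket width is a tuning assumption.
    public static int CountVerticalLines(IEnumerable<double> cgXValues, double bucketWidthMicroseconds)
    {
        return cgXValues
            .Select(x => (long)(x / bucketWidthMicroseconds))
            .Distinct()
            .Count();
    }
}
```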
// AXIOM 7: SANJOY NATH'S QHENOMENOLOGY PHYSICS REASONING SYSTEM ON WAVES AND DIGITAL SIGNALS HAS FOUND (ON TESTING 380000 WAV FILES) THAT TERMS LIKE WAVELENGTH ARE NOT NECESSARY TO ANALYSE WAVE-LIKE DIGITAL SIGNALS, TERMS LIKE FREQUENCY ARE NOT NECESSARY TO HANDLE DIGITAL SIGNAL PROCESSING, AND NEITHER COS COMPONENTS NOR SIN COMPONENTS ARE NEEDED TO DESCRIBE WAVE-LIKE OR DIGITAL-SIGNAL-LIKE DATA. THE QUEUE OF AABB RECTANGLES BEHAVES AS THE WAVE NATURE OF LIGHT, STACKS OF THE SAME AABB RECTANGLES BEHAVE AS THE PARTICLE NATURE OF LIGHT, AND THE SPECTRAL NATURE OF LIGHT IS NOTHING BUT THE ALIGNMENT OF THE CGs OF THESE STACKED AABB OBJECTS OBSERVED FROM THE TOP VIEW. SANJOY NATH'S QHENOMENOLOGICAL REASONING ON THE THEORY OF WAVES COMPLETELY IGNORES TERMS LIKE FREQUENCY AND WAVELENGTH AND TREATS WAVES AS QUEUES OF AABB OBJECTS OR STACKS OF AABB OBJECTS.
// AXIOM 6: SANJOY NATH'S QHENOMENOLOGY PHYSICS REASONING SYSTEM ON WAVES AND DIGITAL SIGNALS HAS SEEN THAT IF THE CREST AABB BOXES HAVE WIDTHS (TAKEN IN MICROSECONDS) W_C_1, W_C_2 ... W_C_N, AND THE WIDTHS IN MICROSECONDS FOR THE TROUGH OBJECTS ARE W_T_1, W_T_2 ... W_T_N (THE TOTAL NUMBERS OF CRESTS AND TROUGHS ARE NOT NECESSARILY THE SAME, BECAUSE SOMETIMES THERE ARE JUST-ZERO-TOUCHING CRESTS AND JUST-ZERO-TOUCHING TROUGHS; THE PROPERTIES STILL HOLD), THEN, AFTER OBSERVING THE STACKS OF TRANSPARENT AABB OBJECTS, THE OBSERVATIONS ON THE 380000-WAV-FILE STUDY REVEAL THAT WHEN THE FREQUENCY OF THE SAME SOUND (TONE) INCREASES, THE WIDTHS SQUEEZE, AND WHEN THE FREQUENCY OF THE SAME SOUND (TONE) DECREASES, THE WIDTHS OF THE CRESTS AND TROUGHS INCREASE. SO THE NUMBER OF CRESTS PER SECOND (1000000 MICROSECONDS) CHANGES AS THE FREQUENCY (TONE) OF THE SOUND CHANGES, AND THE NUMBER OF SHARP VERTICAL LINES (FORMED BY THE ALIGNMENT OF THE MARKED CG POINTS) VISIBLE ON THE STACK OF TRANSPARENT CREST AND TROUGH AABB OBJECTS ULTIMATELY GIVES CLARITY ON THE NUMBER OF FREQUENCIES INVOLVED IN THE WAVE (SPECTRAL ANALYSIS IS EASY). SINCE ALL THE CRESTS AND TROUGHS HAVE QUEUE_SERIAL_NUMBERS, WE CAN REARRANGE THE STACK INTO THE QUEUE AGAIN AFTER THE ANALYSIS IS DONE.
// AXIOM 8: IN SANJOY NATH'S QHENOMENOLOGY PHYSICS REASONING SYSTEM ON WAVES AND DIGITAL SIGNALS, WE PRESERVE THE OVERALL_AABB_COUNTER_EITHER_IT_IS_CREST_OR_IT_IS_TROUGH____COUNTER_TO_RECONSTRUCTION_THE_ACTUAL_QUEUE_STRUCTURE_FROM_THE_STACK_ANALYSIS_DATA BEFORE STACKING IS DONE FROM THE QUEUE STRUCTURE. WE CAN ALSO ALTER THE WHOLE SIGNAL TO RECONSTRUCT RANDOM AMPLITUDE VALUES FOR THE CREST AABBs AND TROUGH AABBs WHILE PRESERVING THE GEOMETRY OF THE CG POINTS AS THEY ARE, AND SUCH RECONSTRUCTIONS OF WAVES WITH COMPLETELY DIFFERENT SETS OF AMPLITUDES WILL GENERATE THE SAME SPECTRAL BEHAVIORS AS THE ACTUAL WAVE OBJECTS. THIS IS AN INTERESTING PROPERTY OF SANJOY NATH'S QHENOMENOLOGY PHYSICS REASONING SYSTEM ON WAVES AND DIGITAL SIGNALS.
// AXIOM 9: SANJOY NATH'S QHENOMENOLOGY PHYSICS REASONING SYSTEM ON WAVES AND DIGITAL SIGNALS CONSIDERS THAT PHOTON-LIKE THINGS DO NOT EXIST; INSTEAD, WHILE LIGHT PASSES THROUGH SLITS OR THROUGH CRYSTALS, THE QUEUES OF WAVE CRESTS AND TROUGHS DISMANTLE INTO STACKS OF AABBs (AS IN THE AXIOMS HERE): THE CREST AABB QUEUES AND TROUGH AABB QUEUES COLLAPSE (DISMANTLE) AND THE STACKS ARE FORMED AS PER SANJOY NATH'S DESCRIPTIONS, SO WE GET THE SPECTRA OF ALIGNED CGs WHICH WE MISTAKE FOR FREQUENCY SPECTRA... SANJOY NATH'S QHENOMENOLOGY PHYSICS REASONING SYSTEM CLAIMS THAT THESE ARE NOT FREQUENCY SPECTRA AT ALL; THEY ARE CGs ALIGNED ON STACKS OF AABBs, WHICH LOOK LIKE VERTICAL LINE SPECTRA DUE TO THE STACKING OF CREST AABB AND TROUGH AABB OBJECTS.
You're looking to bridge the gap between your conceptual "Qhenomenology" system and a practical C# implementation, specifically by leveraging a WordNet-like database. Before diving into the C++ class structure (which seems to be the ultimate goal of your Qhenomenology system's "compilability" rules), understanding the structure and content of your WordNet database is indeed crucial.
Let's break down the likely meanings of the columns you've listed from a typical WordNet database. It's important to note that while the column names are generally standard, the exact content and format can vary slightly between different WordNet distributions or custom datasets.
Here's an explanation of each column:
Synset ID:
Meaning: A unique identifier for a "synset." A synset (short for "set of synonyms") represents a single lexical concept. For example, "car," "automobile," and "auto" would belong to the same synset because they refer to the same concept. This ID allows you to unambiguously refer to a specific concept, regardless of which word is used to express it.
Qhenomenology Relevance: In your Qhenomenology system, if each word is a C++ class, then a Synset ID might represent a unique "concept ID" that ties together multiple "word classes" that are synonymous. This could be crucial for your "compilability" rules if you need to understand that CAR_CLASS and AUTOMOBILE_CLASS refer to the same underlying compiled "concept."
Word:
Meaning: The actual word (or multi-word expression) that is part of a particular synset.
Qhenomenology Relevance: This is likely your "C++ Class Name" as per your axioms. Each unique entry in this column, when combined with its Part of Speech, usually represents a distinct "word-sense."
Part of Speech (POS):
Meaning: The grammatical category of the Word. Common POS tags include:
n (noun)
v (verb)
a (adjective)
r (adverb)
Less common: s (adjective satellite - a type of adjective directly related to another adjective)
Qhenomenology Relevance: This is critically important for defining your C++ classes. "Bank" (financial institution) and "bank" (river bank) are different concepts despite being the same string. Their POS helps disambiguate them. Your Program___for_wordsnets_reordering_qhenomenology code already extracts posTag, which aligns with this. You would likely need BANK_NOUN_CLASS and BANK_VERB_CLASS (if it also had a verb sense) as distinct C++ classes in your system.
Definition:
Meaning: A brief, concise explanation or description of the meaning of the Synset ID (i.e., the concept that the Word in this Part of Speech represents).
Qhenomenology Relevance: This is the most direct translation to your "Column 2 of the Row=R is the constructor of that class" idea. The words within this definition are the W_I (instance variables) that need to be "pre-compiled" (i.e., appear earlier in your Qhenomenologically ordered dictionary). This is where the core "compilability ordering" logic will be applied.
Examples:
Meaning: Sample sentences or phrases illustrating the typical usage of the Word in the context of its Synset ID and Part of Speech.
Qhenomenology Relevance: While not directly used for "compilability" as per your constructor rule, these examples could be invaluable for:
Testing: Verifying if your "compilation" process correctly handles complex sentences or if the W_I dependencies are properly identified.
Contextualization: If your system needs to understand nuance beyond strict definition dependencies, examples provide the "usage patterns" that might influence your "awareness_ratio," "attentiveness_ratio," or "sensitivity_ratio" metrics later.
Synonyms:
Meaning: A list of other words (and often their associated POS, or implicitly the same POS as the main word) that have the same or very similar meaning to the Word within that specific Synset ID.
Qhenomenology Relevance: This directly ties into the Synset ID concept. If CAR_CLASS and AUTOMOBILE_CLASS are synonyms, they would belong to the same synset and should ultimately point to the same underlying "concrete concept" in your compiled system. This implies a need for handling aliases or shared underlying "compiled concept" instances.
Antonyms:
Meaning: Words that express the opposite meaning. (e.g., "hot" -> "cold").
Qhenomenology Relevance: Important for understanding semantic relationships. While not directly for "compilability," antonyms could be considered "dependent" concepts in a different way or for a later stage of "Qhenomenological analysis." For example, the HOT_CLASS might implicitly require the COLD_CLASS to be "compiled" to fully understand the "spectrum" of temperature.
Hypernyms:
Meaning: More general or broader concepts (IS-A relationships). (e.g., "car" is a type of "vehicle"; "dog" is a type of "canine" / "mammal" / "animal").
Qhenomenology Relevance: This is crucial for establishing your "strictly queued order" and "dependency chains." A VEHICLE_CLASS must "compile" before a CAR_CLASS can be fully defined, because a CAR is a VEHICLE. This directly supports your "more independent classes compile before" axiom.
Hyponyms:
Meaning: More specific or narrower concepts (the inverse of Hypernyms). (e.g., "vehicle" has hyponyms like "car," "truck," "bicycle").
Qhenomenology Relevance: These are the "dependent classes" that depend on the Word in the current row. CAR_CLASS would be a hyponym of VEHICLE_CLASS.
Meronyms:
Meaning: Parts of a whole (HAS-A relationships, or PART-OF relationships). (e.g., "wheel" is a meronym of "car"; "finger" is a meronym of "hand").
Qhenomenology Relevance: Another critical type of dependency for your "compilability" and "queued order." A CAR_CLASS "has a" WHEEL_CLASS. Therefore, WHEEL_CLASS must "compile" before CAR_CLASS can be fully "compiled" in this context. This is akin to object composition in C++.
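To make the composition analogy concrete, here is a two-line illustration (class names hypothetical, written in C# to match the document's other snippets rather than C++):

```csharp
public class WHEEL { }  // must exist (be "compiled") first

public class CAR
{
    public WHEEL FrontLeft = new WHEEL(); // CAR "has a" WHEEL, so WHEEL precedes CAR
}
```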
Holonyms:
Meaning: The whole to which something is a part (the inverse of Meronyms). (e.g., "car" is a holonym for "wheel"; "hand" is a holonym for "finger").
Qhenomenology Relevance: If WHEEL_CLASS is compiled, it's a component of CAR_CLASS (its holonym). This reinforces the dependency direction.
Troponyms:
Meaning: More specific ways of doing something (IS-A relationships for verbs). (e.g., "sprint" is a troponym of "run"; "whisper" is a troponym of "speak").
Qhenomenology Relevance: Similar to Hypernyms/Hyponyms but specifically for verbs. RUN_CLASS would need to "compile" before SPRINT_CLASS.
Derivationally Related Forms:
Meaning: Words that are morphologically related but may have different parts of speech or subtle shifts in meaning. (e.g., "run" (verb) and "runner" (noun); "happy" (adjective) and "happiness" (noun)).
Qhenomenology Relevance: These might be treated as different classes, but their relationship could be encoded. For instance, HAPPY_ADJECTIVE_CLASS might influence the compilation of HAPPINESS_NOUN_CLASS. This relates to your idea of "Character class is sub class of Bit class .. Byte class."
Lexical Relations:
Meaning: A broader category that might include various other semantic relationships not covered by the more specific categories above (e.g., attribute relations, domain relations).
Qhenomenology Relevance: This could be a catch-all for additional "dependencies" or "instance variables" that aren't strict hierarchies or part-of relationships, but still contribute to a word's "compilability" or "meaning" within your system. You'd need to examine the content to understand these specific relationships.
Word Frequency:
Meaning: How often a particular Word (or word-sense) appears in a given corpus of text.
Qhenomenology Relevance: Your tokenFrequencies calculation in the C# code directly correlates with this. While your axioms state "probability is backdated measure," you also introduce "awareness_ratio," "attentiveness_ratio," and "sensitivity ratio" which likely rely on frequency or usage data. This column could inform those calculations. It also relates to the idea of "regular use vocabulary."
Senses:
Meaning: Often a numerical identifier for a specific sense of a word if a word has multiple meanings (e.g., "bank" sense 1: financial institution, "bank" sense 2: river edge). This might be redundant with Synset ID if each row already represents a unique word-sense, or it might refine the Synset ID further.
Qhenomenology Relevance: Critical for distinguishing between different "C++ classes" that share the same string name but have different meanings (e.g., BANK_NOUN_SENSE1_CLASS vs. BANK_NOUN_SENSE2_CLASS).
Sentiment:
Meaning: The emotional tone or polarity associated with the word or concept (e.g., positive, negative, neutral). This often comes from external sentiment lexicons rather than being a core WordNet concept itself.
Qhenomenology Relevance: Your axioms explicitly state, "Axiom All human emotions are also just cpp class name They don't have any meaning." This directly clashes with the concept of "Sentiment." If emotions "don't have any meaning" in your system, then this column's data would be considered "manipulative" or "non-compilable" as it introduces subjective, non-concrete (by your definition) information. You would likely choose to either ignore this column or use it to identify "manipulative" words/concepts that carry sentiment.
How this relates to your C# implementation and "Qhenomenology":
Your RowData___for_wordsnets_qhenomenology_reordering class needs to be expanded to hold this richer information.
The Definition, Hypernyms, Meronyms, and Troponyms columns are the primary candidates for the "words used in column 2" or "instance variables of pre compiled classes" that dictate your "compilability ordering."
You'll need a sophisticated parsing and dependency-tracking mechanism. For each word/concept, you'll have to:
1. Identify its ClassName (from Word + Part of Speech + possibly Sense).
2. Extract all potential W_I (dependencies) from its Definition, Hypernyms, Meronyms, Troponyms, and potentially Lexical Relations.
3. Build a dependency graph.
4. Implement a topological sort (or a similar algorithm that respects your "compilability closure principles") to establish the "strictly queued order"; a minimal sketch follows this list.
Any cycles in this graph would indicate "non-compilability" and thus "manipulation" as per your axioms.
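Here is a minimal sketch of steps 3 and 4 using Kahn's algorithm (my illustration; the dependency map is assumed to have been extracted already). Names left unordered at the end sit on a cycle, i.e. they are "non-compilable" under the axioms:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class DependencyOrdering
{
    // deps: class name -> names it requires (from Definition, Hypernyms, Meronyms, Troponyms).
    // Returns the compilability order plus any names trapped on dependency cycles.
    public static (List<string> Order, List<string> Cyclic) TopoSort(Dictionary<string, HashSet<string>> deps)
    {
        // Count only dependencies that are themselves defined classes; unknown
        // tokens are treated here as already-defined externals (an assumption).
        var indegree = deps.Keys.ToDictionary(k => k, k => deps[k].Count(d => deps.ContainsKey(d)));
        var ready = new Queue<string>(indegree.Where(kv => kv.Value == 0).Select(kv => kv.Key));
        var order = new List<string>();
        while (ready.Count > 0)
        {
            var name = ready.Dequeue();
            order.Add(name);
            foreach (var kv in deps)
                if (kv.Value.Contains(name) && --indegree[kv.Key] == 0)
                    ready.Enqueue(kv.Key);
        }
        var cyclic = deps.Keys.Except(order).ToList();
        return (order, cyclic);
    }
}
```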
The TOTAL_TOKENS_FOUND_IN_CURRENT_CLASS_ROW in your current RowData class is a good start, but you'll need to distinguish which tokens are actual dependencies versus just descriptive words.
Understanding these columns is the critical first step to applying your "compilability" logic to a real-world linguistic dataset.
You're absolutely right — before translating your Qhenomenology system into a C++ class-based structure, we must build a Qhenomenology Analyzer in C# that can parse WordNet-style databases and enforce CPP-style dependency validations for abstract concepts like Consciousness, Justice, Frequency, etc.
✅ Step 1: Understanding WordNet Column Meanings
To build this analyzer, we first map each column in your WordNet-based database to Qhenomenological structures.
Here is the full breakdown of each column:
| Column Name | Qhenomenology Interpretation |
|---|---|
| Synset ID | Unique Concept ID — maps to a C++ class name. |
| Word | The label (variable name) for the CPP class. |
| Part of Speech | Helps define class type hierarchy (e.g., noun = object class, verb = function class, adj = attribute). |
| Definition | Formal comment/summary of the class. Used to validate vague or circular definitions. |
| Examples | Instantiation traces — literal use-cases of class. Important for deducing constructor logic. |
| Synonyms | Equivalent classes (same behavior, different names). Maps to typedef or alias in C++. |
| Antonyms | Logical opposite classes. Required for consistency validation. |
| Hypernyms | Parent classes (base class in inheritance). |
| Hyponyms | Subclasses / Derived classes. |
| Meronyms | Component members (class members inside). Example: Car → Engine. |
| Holonyms | Containers / Aggregates that include this word. E.g., Wheel → Car. |
| Troponyms | Specialized actions derived from base verbs. Useful for function overriding. |
| Derivationally Related Forms | Morphological relatives — often maps to methods or derived concept forms. |
| Lexical Relations | Other related classes. May suggest namespace or package-level relationships. |
| Word Frequency | Usage priority — helpful in class importance ranking (not for compilation). |
| Senses | Number of polysemous senses. If more than one, needs disambiguation or overloads. |
| Sentiment | Emotive attribute (AXIOM: must be removed unless explicitly defined). |
✅ Step 2: Classify Columns for Qhenomenology Compilation System
✔️ Required for Compilation Dependency Check:
Synset ID
Word
Part of Speech
Definition
Synonyms
Hypernyms
Hyponyms
Meronyms
Holonyms
🟡 Optional / Semantic Enhancers:
Derivationally Related Forms
Lexical Relations
Examples
🔴 Must be Filtered (Not Allowed in Reasoning System without Definition):
Sentiment
Word Frequency
Examples (if emotionally charged)
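One way to operationalize this classification is to encode the three categories as lookup sets that the parser consults before feeding a column into the reasoning system. This is a sketch; the set names are illustrative and assume using System.Collections.Generic.
csharp
// Illustrative encoding of the three column categories above.
static readonly HashSet<string> RequiredColumns = new HashSet<string>
{
    "Synset ID", "Word", "Part of Speech", "Definition", "Synonyms",
    "Hypernyms", "Hyponyms", "Meronyms", "Holonyms"
};
static readonly HashSet<string> OptionalColumns = new HashSet<string>
{
    "Derivationally Related Forms", "Lexical Relations", "Examples"
};
// Parsed but excluded from reasoning until their terms are themselves defined.
static readonly HashSet<string> FilteredColumns = new HashSet<string>
{
    "Sentiment", "Word Frequency"
};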
✅ Step 3: Suggested Structure for Qhenomenology C# Analyzer Class
csharp
using System.Collections.Generic;

public class QhenomenologyConcept
{
public string SynsetID;
public string Word;
public string PartOfSpeech;
public string Definition;
public List<string> Synonyms = new List<string>();
public List<string> Antonyms = new List<string>();
public List<string> Hypernyms = new List<string>();
public List<string> Hyponyms = new List<string>();
public List<string> Meronyms = new List<string>();
public List<string> Holonyms = new List<string>();
public List<string> Troponyms = new List<string>();
public List<string> DerivationalForms = new List<string>();
public List<string> LexicalRelations = new List<string>();
public int WordFrequency;
public int SenseCount;
public string Sentiment;
public bool IsCompilable(QhenomenologyCompiler dict) // parameter type matches the call in QhenomenologyCompiler below
{
// Return false if Hypernyms or Meronyms reference undefined Words
// Return false if Definition uses undefined Words
// Return false if Sentiment != null and Emotion not yet defined
return true;
}
}
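A minimal sketch of how the three commented checks inside IsCompilable might be filled in, looking dependencies up in the compiler's Concepts dictionary (the exact rules here are illustrative, not definitive):
csharp
// Sketch: replaces the stub body of QhenomenologyConcept.IsCompilable above.
public bool IsCompilable(QhenomenologyCompiler dict)
{
    // An undefined parent (hypernym) or part (meronym) blocks compilation.
    foreach (var w in Hypernyms)
        if (!dict.Concepts.ContainsKey(w)) return false;
    foreach (var w in Meronyms)
        if (!dict.Concepts.ContainsKey(w)) return false;
    // Every word used in the Definition must already be a defined concept.
    foreach (var token in Definition.Split(' ', '_'))
        if (token.Length > 0 && !dict.Concepts.ContainsKey(token)) return false;
    // Sentiment is only admissible if the emotion word itself is defined.
    if (!string.IsNullOrEmpty(Sentiment) && !dict.Concepts.ContainsKey(Sentiment))
        return false;
    return true;
}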
✅ Step 4: Compilation Validation Algorithm Sketch
csharp
using System.Collections.Generic;

public class QhenomenologyCompiler
{
public Dictionary<string, QhenomenologyConcept> Concepts;
public void LoadFromCSV(string filePath)
{
// Read and parse all rows into Concepts
}
public List<string> GetUncompilableConcepts()
{
var result = new List<string>();
foreach (var concept in Concepts.Values)
{
if (!concept.IsCompilable(this))
result.Add(concept.Word);
}
return result;
}
public void ExportDependencyGraph(string outputDotFile)
{
// Use Graphviz format: show inheritance (hypernyms), aggregation (meronyms), containment (holonyms)
}
}
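A minimal sketch of what ExportDependencyGraph could emit; the edge colors are arbitrary choices for this illustration, not a prescribed Graphviz convention:
csharp
// Sketch: writes one directed edge per hypernym/meronym/holonym relation.
public void ExportDependencyGraph(string outputDotFile)
{
    using (var w = new System.IO.StreamWriter(outputDotFile))
    {
        w.WriteLine("digraph Qhenomenology {");
        foreach (var c in Concepts.Values)
        {
            foreach (var h in c.Hypernyms)   // inheritance: child -> parent
                w.WriteLine("  \"" + c.Word + "\" -> \"" + h + "\" [color=blue];");
            foreach (var m in c.Meronyms)    // aggregation: whole -> part
                w.WriteLine("  \"" + c.Word + "\" -> \"" + m + "\" [color=green];");
            foreach (var ho in c.Holonyms)   // containment: part -> whole
                w.WriteLine("  \"" + c.Word + "\" -> \"" + ho + "\" [color=red];");
        }
        w.WriteLine("}");
    }
}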
✅ Do You Want These Implemented?
I can generate:
🧠 A working C# project that reads this CSV-format WordNet and checks for uncompiled class structures.
📊 A Graphviz .dot file exporter showing dependency graphs.
🔍 An analyzer that finds circular, unresolvable, or emotionally undefined terms.
🚫 A sentiment sanitizer that flags all classes with non-neutral definitions.
Would you like to proceed with this C# implementation now? I can begin with the class and parser immediately. Just confirm the exact CSV column order and share a small sample (if needed).
We see only these POS in WordNet. What are these?
POS: n, r, a, s, v
You're absolutely right to ask for clarification on those POS tags! WordNet uses a simplified set of Part-of-Speech (POS) tags compared to more granular tagsets like the Penn Treebank tagset (which has over 40 tags).
Here's what each of those single-letter WordNet POS tags means:
n: Noun
This covers all types of nouns: common nouns (e.g., dog, table, justice), proper nouns (e.g., Noida, Sanjoy Nath), mass nouns (e.g., water, information), and count nouns.
Example: car, apple, freedom, Delhi.
r: Adverb
This includes words that modify verbs, adjectives, other adverbs, or whole clauses. They often describe how, when, where, or to what extent something happens.
Example: quickly, very, here, always.
a: Adjective
These words describe or modify nouns and pronouns.
Example: big, happy, blue, important.
s: Adjective Satellite
This is a special category of adjective in WordNet. Adjective satellites are adjectives that derive their meaning from a direct conceptual opposition to another adjective. They are essentially synonyms of an adjective that has an antonym.
Think of it as adjectives that fall under a more general "cluster" of adjectives, often forming antonym pairs. For example, "dark" might be an adjective, but "light" (as an adjective meaning not dark) could be an adjective satellite if it's considered to define itself primarily by its opposition to "dark."
This distinction is often subtle and can be confusing. For most general NLP tasks, a and s are often treated similarly, but in WordNet, s is used to denote this specific semantic relationship.
Example: warm (as in "warm temperature") might be a, but cold, if directly defined in opposition to warm, could be s. It's less common to encounter s as frequently as a.
v: Verb
These words express actions, occurrences, or states of being.
Example: run, eat, sleep, exist.
These five tags (n, r, a, s, v) are the primary and essentially only POS tags used within the core WordNet database for English. They are a highly condensed representation of the grammatical categories, designed to facilitate the semantic relationships within the network (like hypernyms, meronyms, etc.).
In my WordNet database CSV file there are 6 columns, like this, and we need a C# program which will split all the tokens from every column on spaces and underscores, find the unique string tokens, prepare the frequency distribution of each of these string tokens, and produce a report in descending order. Columns: Word, Part of Speech, Definition, Examples, Synonyms, Antonyms
able a _usually followed by to__ having the necessary means or skill or know_how or authority to do something able to swim_ she was able to program her computer_ we were at last able to buy a car_ able to get a grant for the project able unable
unable a _usually followed by to__ not having the necessary means or skill or know_how unable to get to town without a car_ unable to obtain funds unable able
The POS (Part of Speech) codes in WordNet stand for the grammatical category of the word. Here's the meaning of each code you've shown:
| Code | Meaning | Qhenomenology Mapping |
|---|---|---|
| n | Noun | Class / Entity / Object |
| v | Verb | Action / Method / Function |
| a | Adjective | Property / Attribute |
| s | Adjective Satellite | Supporting Property (similar to a) |
| r | Adverb | Modifier of Action (Function decorator) |
🔍 Detailed Interpretation for Qhenomenology:
n (Noun):
Used to create C++-like classes or structures.
Examples: Consciousness, Tree, Justice.
v (Verb):
Used as functions or methods inside or across classes.
May get translated into actions (e.g., perceive(), evaluate()).
a (Adjective):
Represents traits or properties of noun-classes.
Example: intelligent, rapid.
s (Satellite Adjective):
Linked to a head adjective (a) and often refines or supports the main property.
Example: trustful might be a satellite of trusting.
r (Adverb):
Used to modify verbs/functions, often changing how an action occurs.
Example: quickly, silently.
✅ Summary Chart for Compiler
| POS | WordNet Role | C++ Mapping | Qhenomenology Role |
|---|---|---|---|
| n | Noun | class, struct | Concept container |
| v | Verb | method, function | Act of being / doing |
| a | Adjective | member variable or trait | Essential feature / perception tag |
| s | Satellite adjective | refined trait (inherits from a) | Sub-property / secondary trait |
| r | Adverb | modifier (decorator or style) | Quality of behavior / interaction |
Would you like me to include this interpretation directly in your C# QhenomenologyConcept parser class as part of the PartOfSpeech parsing logic? I can define it as an enum or property comment mapping too.
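For instance, a minimal sketch of that enum mapping (the enum and member names are illustrative):
csharp
public enum QhenomenologyPosRole
{
    ConceptContainer,   // n -> class / struct
    Action,             // v -> method / function
    Trait,              // a -> member variable or trait
    SubTrait,           // s -> refined trait (inherits from a)
    BehaviorModifier    // r -> modifier / decorator
}

public static QhenomenologyPosRole ParsePos(string pos)
{
    switch (pos)
    {
        case "n": return QhenomenologyPosRole.ConceptContainer;
        case "v": return QhenomenologyPosRole.Action;
        case "a": return QhenomenologyPosRole.Trait;
        case "s": return QhenomenologyPosRole.SubTrait;
        case "r": return QhenomenologyPosRole.BehaviorModifier;
        default: throw new System.ArgumentException("Unknown WordNet POS tag: " + pos);
    }
}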
Here's a full C# program that processes your WordNet CSV file by:
Reading each of the 6 columns: Word, Part of Speech, Definition, Examples, Synonyms, Antonyms.
Splitting text by whitespace and underscores (_) into string tokens.
Counting frequencies of each unique token.
Sorting and saving the token frequency distribution report in descending order.
✅ C# 7.0 Compatible Code (Console App)
csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
namespace WordNetTokenFrequencyAnalyzer
{
class Program
{
static void Main(string[] args)
{
string inputCsvPath = @"wordnet_data.csv"; // Replace with your actual CSV file path
string outputReportPath = @"wordnet_token_frequency_report.txt";
var tokenFrequency = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);
var lines = File.ReadAllLines(inputCsvPath);
foreach (var line in lines.Skip(1)) // Skip header
{
string[] columns = line.Split('\t'); // Assuming TSV (tab-separated); if CSV use ','
if (columns.Length < 6) continue;
for (int i = 0; i < 6; i++)
{
string content = columns[i];
var tokens = content
.Replace("_", " ")
.Split(new[] { ' ', '\t' }, StringSplitOptions.RemoveEmptyEntries);
foreach (var token in tokens)
{
string cleanToken = token.Trim();
if (string.IsNullOrWhiteSpace(cleanToken))
continue;
if (!tokenFrequency.ContainsKey(cleanToken))
tokenFrequency[cleanToken] = 0;
tokenFrequency[cleanToken]++;
}
}
}
var sortedTokens = tokenFrequency.OrderByDescending(kv => kv.Value);
using (var writer = new StreamWriter(outputReportPath))
{
writer.WriteLine("Token\tFrequency");
foreach (var kv in sortedTokens)
{
writer.WriteLine($"{kv.Key}\t{kv.Value}");
}
}
Console.WriteLine("Token frequency report generated: " + outputReportPath);
}
}
}
📝 Notes:
Save your CSV file as tab-separated (.tsv) if columns contain commas. Or change line.Split('\t') to line.Split(',') if it's really CSV.
This program produces a text file with token counts in descending order.
Make sure your data is UTF-8 encoded for Unicode Bengali/English tokens.
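For example, the read and split lines in Main could be adjusted like this (a sketch that slots into the program above; pick the delimiter matching your actual file):
csharp
// Read explicitly as UTF-8 so Unicode Bengali/English tokens survive intact.
var lines = File.ReadAllLines(inputCsvPath, System.Text.Encoding.UTF8);
char delimiter = '\t';               // tab-separated (.tsv); use ',' for a true CSV
foreach (var line in lines.Skip(1))  // skip header, as before
{
    string[] columns = line.Split(delimiter);
    // ... token splitting and counting as above ...
}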
Would you like this upgraded into a GUI version or integrated with your Qhenomenology analysis system class structure next?
Okay, I understand. You want a C# program that processes your WordNet CSV, specifically focusing on the six columns you've provided: Word, Part of Speech, Definition, Examples, Synonyms, and Antonyms.
The program should:
Read the CSV file.
For each line and for content within these specified columns:
Split the text into tokens using spaces and underscores as delimiters.
Convert all tokens to uppercase invariant (matching the approach of your previous code snippet).
Filter out tokens that are empty, whitespace-only, or contain non-alphabetic characters (i.e., keep only pure alphabetical words).
Find all unique string tokens across all these columns and lines.
Calculate the frequency distribution for each unique token.
Generate a report (CSV file) of these token frequencies, sorted in descending order of frequency.
Here's a C# console application that accomplishes this. I'll include explanations for each part of the code.
C#
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions; // For splitting tokens
public class WordNetTokenFrequency
{
public static void Main(string[] args)
{
Console.WriteLine("WordNet Token Frequency Analyzer");
Console.WriteLine("--------------------------------");
// 1. Get input file path from user
Console.Write("Enter the full path to your WordNet CSV file: ");
string inputFilePath = Console.ReadLine();
if (!File.Exists(inputFilePath))
{
Console.WriteLine($"Error: File not found at '{inputFilePath}'");
Console.WriteLine("Press any key to exit.");
Console.ReadKey();
return;
}
string outputFilePath = Path.Combine(
Path.GetDirectoryName(inputFilePath),
Path.GetFileNameWithoutExtension(inputFilePath) + "_TokenFrequencies.csv"
);
// Dictionary to store token frequencies (case-insensitive)
var tokenFrequencies = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);
// Regex to split by spaces and underscores, keeping only letters
// This regex will split on one or more spaces or underscores.
// We'll then filter out non-alphabetic tokens.
// Pattern: Matches one or more non-alphabetic characters or underscores.
var tokenSplitRegex = new Regex(@"[ _]+", RegexOptions.Compiled);
// Alternative for more general non-alphanumeric splitting (like your previous code)
// var tokenSplitRegex = new Regex(@"[^A-Z]+", RegexOptions.Compiled | RegexOptions.IgnoreCase);
Console.WriteLine("\nProcessing file...");
try
{
// Read all lines, skipping the header (first line)
var lines = File.ReadAllLines(inputFilePath);
// Assuming the header is: Word,Part of Speech,Definition,Examples,Synonyms,Antonyms
// We'll process content from relevant columns.
// These indices are 0-based
const int WORD_COL_INDEX = 0;
const int POS_COL_INDEX = 1; // You asked to split all columns, though POS is typically single char.
const int DEFINITION_COL_INDEX = 2;
const int EXAMPLES_COL_INDEX = 3;
const int SYNONYMS_COL_INDEX = 4;
const int ANTONYMS_COL_INDEX = 5;
// Iterate from the second line (index 1) to skip the header
for (int i = 1; i < lines.Length; i++)
{
string line = lines[i];
string[] columns = line.Split('\t'); // Assuming tab-separated as per common WordNet dumps or CSV-like structure, if it's strictly comma, change to ','
// Ensure we have enough columns to avoid index out of bounds
if (columns.Length <= ANTONYMS_COL_INDEX)
{
Console.WriteLine($"Skipping malformed line {i + 1}: Not enough columns.");
continue;
}
// Process each relevant column
ProcessColumn(columns[WORD_COL_INDEX], tokenFrequencies, tokenSplitRegex);
ProcessColumn(columns[POS_COL_INDEX], tokenFrequencies, tokenSplitRegex); // POS is usually 'n', 'v', etc. which will be treated as single-letter tokens.
ProcessColumn(columns[DEFINITION_COL_INDEX], tokenFrequencies, tokenSplitRegex);
ProcessColumn(columns[EXAMPLES_COL_INDEX], tokenFrequencies, tokenSplitRegex);
ProcessColumn(columns[SYNONYMS_COL_INDEX], tokenFrequencies, tokenSplitRegex);
ProcessColumn(columns[ANTONYMS_COL_INDEX], tokenFrequencies, tokenSplitRegex);
// Optional: Progress indicator for large files
if (i % 10000 == 0) // Update every 10,000 lines
{
Console.Write($"\rProcessed {i} lines...");
}
}
Console.WriteLine($"\rFinished processing {lines.Length - 1} data lines. "); // Clear line and show final count
}
catch (Exception ex)
{
Console.WriteLine($"\nAn error occurred during file processing: {ex.Message}");
Console.WriteLine("Press any key to exit.");
Console.ReadKey();
return;
}
// 3. Prepare the frequency report in descending order
Console.WriteLine("\nGenerating report...");
try
{
using (StreamWriter writer = new StreamWriter(outputFilePath))
{
writer.WriteLine("Token,Frequency"); // Header for the output CSV
foreach (var entry in tokenFrequencies.OrderByDescending(kv => kv.Value))
{
writer.WriteLine($"{entry.Key},{entry.Value}");
}
}
Console.WriteLine($"Report successfully written to: {outputFilePath}");
}
catch (Exception ex)
{
Console.WriteLine($"\nAn error occurred while writing the report: {ex.Message}");
}
Console.WriteLine("\nAnalysis complete. Press any key to exit.");
Console.ReadKey();
}
/// <summary>
/// Processes a single column's text, extracts tokens, and updates their frequencies.
/// </summary>
/// <param name="columnText">The string content of the column.</param>
/// <param name="tokenFrequencies">The dictionary to store token frequencies.</param>
/// <param name="splitter">The Regex object used for splitting tokens.</param>
private static void ProcessColumn(string columnText, Dictionary<string, int> tokenFrequencies, Regex splitter)
{
if (string.IsNullOrWhiteSpace(columnText))
{
return; // Skip empty columns
}
// Replace any double underscores with a single space, then split by spaces and underscores
// The example shows '_usually followed by to__', which indicates underscores as separators
// Your previous code had `Replace("______", " ")` - here we'll use regex for flexibility
// and just assume single/multiple underscores act as delimiters.
// If "______" needs to be explicitly replaced by a single space *before* splitting,
// you would add: columnText = columnText.Replace("______", " ");
// Split the string into potential tokens
string[] rawTokens = splitter.Split(columnText);
foreach (string rawToken in rawTokens)
{
// Clean and validate the token
// Trim whitespace, convert to uppercase invariant
string cleanedToken = rawToken.Trim().ToUpperInvariant();
// Filter: Must not be empty/whitespace and must contain only alphabetic characters
// This is based on your previous code's filtering: `System.Text.RegularExpressions.Regex.IsMatch(token, "^[A-Z]+$")`
// If you want to allow numbers too (like in '0.001'), change regex to "^[A-Z0-9]+$"
if (!string.IsNullOrWhiteSpace(cleanedToken) && Regex.IsMatch(cleanedToken, "^[A-Z]+$"))
{
// Update frequency
if (tokenFrequencies.ContainsKey(cleanedToken))
{
tokenFrequencies[cleanedToken]++;
}
else
{
tokenFrequencies[cleanedToken] = 1;
}
}
}
}
}
Explanation of the Code:
using Directives:
System: For basic console input/output and StringComparer.
System.Collections.Generic: For Dictionary and List.
System.IO: For file operations (File.ReadAllLines, StreamWriter, Path).
System.Linq: For OrderByDescending (an extension method for sorting).
System.Text.RegularExpressions: For Regex to split strings based on patterns.
Main Method:
User Input: Prompts the user to enter the path to their CSV file. Basic error handling checks if the file exists.
Output Path: Constructs an output file path (e.g., yourfile_TokenFrequencies.csv) in the same directory as the input file.
tokenFrequencies Dictionary: This is the core data structure (Dictionary<string, int>) that will store each unique token as a key and its count as the value. StringComparer.OrdinalIgnoreCase is used to ensure that "Word" and "word" are treated as the same token.
tokenSplitRegex: A Regex object new Regex(@"[ _]+", RegexOptions.Compiled) is defined.
@"[ _]+": This is the regular expression pattern.
[ _]: Matches either a space or an underscore (_).
+: Matches one or more occurrences of the preceding character set.
So, it splits the string whenever it encounters one or more spaces or underscores.
RegexOptions.Compiled: Improves performance for repeated use of the regex.
Important Note on tokenSplitRegex: I've used [ _]+ to directly address your request for splitting by spaces AND underscores. Your previous code used [^A-Z0-9]+ which splits by any non-alphanumeric character. If you want to retain that broader splitting (e.g., splitting by hyphens, periods, etc.), uncomment the alternative tokenSplitRegex line.
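To see the practical difference between the two patterns (a quick illustrative check):
csharp
var input = "able_to swim; know_how";
// [ _]+ splits only on spaces/underscores, so "swim;" keeps its semicolon
// (the later ^[A-Z]+$ filter would then discard it):
// -> able, to, swim;, know, how
var a = System.Text.RegularExpressions.Regex.Split(input, @"[ _]+");
// [^A-Za-z]+ splits on ANY non-letter, so punctuation disappears:
// -> able, to, swim, know, how
var b = System.Text.RegularExpressions.Regex.Split(input, @"[^A-Za-z]+");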
File Reading Loop:
File.ReadAllLines(inputFilePath): Reads the entire file into a string[] array. For very, very large files (gigabytes), you might want to use File.ReadLines (which streams lines) to be more memory-efficient, but ReadAllLines is simpler for typical CSV sizes.
for (int i = 1; i < lines.Length; i++): Starts from i = 1 to skip the header row.
line.Split('\t'): Assumes your CSV is tab-separated. If it's truly comma-separated, change '\t' to ','. WordNet dumps are often tab-separated, so check your file.
Column Indices: const int variables are used for clarity to identify which column corresponds to which data type.
ProcessColumn(...): Calls a helper method to handle the token extraction and frequency counting for each relevant column's content.
Progress Indicator: A simple Console.Write("\rProcessed...") updates the console line to show progress for large files.
Error Handling: A try-catch block surrounds the file processing to gracefully handle IOExceptions or other issues.
Report Generation:
using (StreamWriter writer = new StreamWriter(outputFilePath)): Opens a file for writing, ensuring it's properly closed even if errors occur.
tokenFrequencies.OrderByDescending(kv => kv.Value): Uses LINQ to sort the dictionary entries by their frequency (kv.Value) in descending order.
The sorted tokens and their frequencies are written to the output CSV.
ProcessColumn Helper Method:
Input: Takes the columnText (string content of a column), the tokenFrequencies dictionary (to update), and the splitter regex.
string[] rawTokens = splitter.Split(columnText);: Uses the pre-compiled regex to split the column text into tokens.
Token Cleaning and Filtering:
string cleanedToken = rawToken.Trim().ToUpperInvariant();: Removes leading/trailing whitespace and converts to uppercase for consistent counting.
if (!string.IsNullOrWhiteSpace(cleanedToken) && Regex.IsMatch(cleanedToken, "^[A-Z]+$")): This is the crucial filtering step.
!string.IsNullOrWhiteSpace(cleanedToken): Ensures the token isn't just empty or spaces.
Regex.IsMatch(cleanedToken, "^[A-Z]+$"): Ensures the token consists only of uppercase English letters (A-Z). This will filter out numbers, punctuation, and mixed alphanumeric sequences.
Frequency Update: Increments the count for an existing token or adds the new token with a count of 1.
How to Run This Code:
Save: Save the code as a .cs file (e.g., WordNetAnalyzer.cs).
Compile: Open a command prompt or PowerShell and navigate to the directory where you saved the file. Then, compile it using the .NET SDK (if you have it installed):
Bash
dotnet new console -n WordNetAnalyzerApp
cd WordNetAnalyzerApp
# Replace the contents of Program.cs with the code above.
# OR, if you just have the .cs file:
csc WordNetAnalyzer.cs
If you're using Visual Studio, create a new Console Application project and paste the code into Program.cs.
Run:
Bash
dotnet run
# OR, if compiled with csc directly:
WordNetAnalyzer.exe
The program will ask you for the path to your WordNet CSV file. Provide it, and it will generate the frequency report.
This program should give you a solid foundation for analyzing your WordNet data according to your "Qhenomenology" requirements for tokenization and frequency distribution.