Introduction
This page documents my independent research inspired by the Global Consciousness Project (GCP), directed by Dr. Roger Nelson and with roots in Princeton's PEAR Lab.
Hypothesis: Collective attention or emotion during significant global events correlates with measurable deviations in the outputs of a worldwide network of random number generators (RNGs).
I've always been fascinated by "The Dot" and its variance method. In the late 1990s, the idea for the GCP became real: random event generators (REGs) around the world, networked online like EEG electrodes on the Earth.
Recent Work
I built a local clone of the GCP Dot that scrapes its variance data and logs it in near real-time for visualization and model training. The tool samples the variance percentage and transition intervals every few seconds and plots them; I'm now adding JSON log export for historical queries.
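The logging step can be sketched as follows. This is a minimal sketch, not the actual tool: `fetch_variance` is a hypothetical placeholder for the real scraping call, and the JSONL record layout is my own assumption.

```python
import json


def fetch_variance():
    """Placeholder for the real scraper (hypothetical).

    The actual tool pulls the variance percentage from the GCP Dot page;
    the request/parsing details are omitted here.
    """
    raise NotImplementedError


def append_sample(log_path, timestamp, variance_pct):
    """Append one sample as a JSON line, keeping the log query-friendly."""
    record = {"t": timestamp, "variance_pct": variance_pct}
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")


def load_samples(log_path):
    """Read the JSONL log back for plotting or historical queries."""
    with open(log_path, encoding="utf-8") as fh:
        return [json.loads(line) for line in fh if line.strip()]
```

A polling loop would then call `fetch_variance()` every few seconds and hand the result to `append_sample()`. JSONL (one object per line) makes the log appendable without rewriting the file.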
As global tension rises (e.g., the Iran–Israel crisis), I'm comparing peaks in variance against breaking-news timestamps. Early, informal comparisons suggest some temporal alignment, though this still needs proper statistical testing before I claim anything.
Update - 20-06-2025: Current LLM is Google Gemini. Quota maxed out too quickly; switching to another model.
Update - 21-06-2025: After trying out some other APIs (OpenAI, Claude, and a third-party clone), I decided to pull Ollama's Phi-2 locally: a small model, but ideal for my GitHub-linked dataset. Training is underway via Colab.
P.S: It did hallucinate. Pray for me.
Update - 21-06-2025 / 17:24: I successfully loaded the model, recorded conversations, uploaded them to GitHub, and began training Mistral via Colab (migrating there due to local storage issues), running smaller unit tests along the way. The hallucinations seem to have stopped, but judge for yourself once I post the JSON link.
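For the recorded conversations, a simple flattening step turns them into prompt/response pairs, one JSON object per line, which is a common fine-tuning layout. The record schema here is an assumption for illustration, not the exact format of my GitHub dataset.

```python
import json


def to_training_jsonl(conversations, out_path):
    """Flatten conversations into prompt/response training pairs.

    conversations: list of conversations, each a list of
    (user_msg, assistant_msg) tuples. Writes one JSON object per line.
    """
    count = 0
    with open(out_path, "w", encoding="utf-8") as fh:
        for convo in conversations:
            for user_msg, assistant_msg in convo:
                fh.write(json.dumps(
                    {"prompt": user_msg, "response": assistant_msg}
                ) + "\n")
                count += 1
    return count
```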
Development Progress
Successfully merged the LoRA adapter into the Mistral 7B base model using merge_model.py. Built a comprehensive automation system, including an enhanced ai-cli.js for model interaction.
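merge_model.py itself isn't reproduced here, but a typical LoRA merge with Hugging Face `peft` looks roughly like this sketch; the paths are placeholders, and the heavy imports are deferred inside the function so the outline reads without the dependencies installed.

```python
def merge_lora(base_model_path, adapter_path, output_path):
    """Merge a LoRA adapter into its base model and save the result.

    Uses peft's merge_and_unload(), which folds the low-rank adapter
    weights into the base weights and returns a plain model.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(base_model_path)
    merged = PeftModel.from_pretrained(base, adapter_path).merge_and_unload()
    merged.save_pretrained(output_path)
    # Ship the tokenizer alongside the merged weights.
    AutoTokenizer.from_pretrained(base_model_path).save_pretrained(output_path)
```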
Implemented an automation framework with collect-training-data.js and automate-model-updates.js. Set thresholds: a minimum of 20 new examples and a 30-day retrain interval.
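The retrain gate implied by those thresholds can be sketched in a few lines. Whether the real automate-model-updates.js combines the two conditions with AND or OR is my assumption; this sketch requires both.

```python
from datetime import datetime, timedelta

MIN_EXAMPLES = 20                       # minimum new examples before retraining
RETRAIN_INTERVAL = timedelta(days=30)   # minimum gap between retrains


def should_retrain(new_examples, last_retrain, now=None):
    """Retrain only when enough new data has accumulated AND the last
    retrain is old enough (the AND is an assumption, see above)."""
    now = now or datetime.utcnow()
    return (new_examples >= MIN_EXAMPLES
            and (now - last_retrain) >= RETRAIN_INTERVAL)
```

Requiring both conditions avoids retraining on a trickle of data while also capping how often a large burst of examples can trigger a run.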
Current Status: Addressing tokenization artifacts in the quantized model. Optimizing model storage configuration:
- Configured system-wide OLLAMA_MODELS environment variable
- Redirected model storage to G: drive
- Performing clean reinstall of Ollama
Next Steps
- Verify Ollama's new model storage location
- Recreate Alza model using existing merged files
- Validate tokenization improvements
- Resume testing with ai-cli-fast.js