Don't Return to The Borg or ChatGPT, Google Gemini and their Central Intelligence Agency (CIA) Venture Capital - In-Q-Tel
Don't Return to Surveillance Capitalism, Surveillance State / Police State-CIA State, Censorship, GroupThink Social Engineering, Monetization of EVERYTHING from TPTB / Big Tech Technocracy-Technocrats and their Military-Digital Complex - Military-Industrial Complex - (Read Surveillance Valley - The Rise of the Military-Digital Complex) (navbar_surveillance_capitalism - see also Borg Usage Disclaimer)
SKYNET is a U.S. National Security Agency program that applies machine learning to communications data to extract information about possible terror suspects. The tool is used to identify targets, such as al-Qaeda couriers, who move between GSM cellular networks. In particular, mobile usage patterns such as swapping SIM cards between phones that share the same ESN, MEID or IMEI number are deemed indicative of covert activity. Like many other security programs, SKYNET represents social networks as graphs consisting of nodes and edges, and applies classification techniques such as random forest analysis. Because the data set contains a very large proportion of true negatives and only a small training set, there is a risk of overfitting. Bruce Schneier argues that a false positive rate of 0.008% would be low for commercial applications, where "if Google makes a mistake, people see an ad for a car they don't want to buy," but "if the government makes a mistake, they kill innocents."
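A quick base-rate sketch makes Schneier's point concrete. Only the 0.008% false positive rate comes from the text above; the population of scanned mobile users, the number of genuine couriers, and the detection rate are illustrative assumptions, not reported figures:

```python
# Base-rate sketch: a "low" false-positive rate still flags many innocents
# when true positives are rare. Only the 0.008% rate is from Schneier;
# the other numbers are illustrative assumptions.

population = 55_000_000        # assumed number of mobile users scanned
true_couriers = 2_000          # assumed number of actual targets
false_positive_rate = 0.00008  # 0.008%, the rate quoted by Schneier
true_positive_rate = 0.5       # assumed: classifier catches half of targets

false_alarms = (population - true_couriers) * false_positive_rate
hits = true_couriers * true_positive_rate
precision = hits / (hits + false_alarms)

print(f"false alarms: {false_alarms:,.0f}")  # ~4,400 innocents flagged
print(f"precision:    {precision:.1%}")      # ~18.5%
```

Even under these generous assumptions, most people the classifier flags are innocent, which is exactly the asymmetry Schneier highlights between an ad mistake and a lethal one.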
Operation Sky Net, commonly known as Skynet (Simplified Chinese: 天网), is a clandestine operation of the Chinese Ministry of Public Security to apprehend Overseas Chinese whom it regards as fugitives guilty of financial crimes in mainland China. The initiative was launched in 2015 to investigate offshore companies and underground banks that transfer money abroad. It has reportedly been consolidated with Operation Fox Hunt (launched in 2014, a year before Operation Sky Net) and has returned around 10,000 fugitives to China over the past decade, including political dissidents and activists.
In 2016 alone, Operation Sky Net repatriated 1,032 fugitives from over 70 countries and recovered CN¥ 2.4 billion. According to the Central Commission for Discipline Inspection, China has captured over 1,200 fugitives, including 140 Party members and government officials, and recovered CN¥ 2.91 billion (US$400 million) of embezzled funds in 2023.
Skynet is a fictional artificial neural network-based conscious group mind and artificial general superintelligence system that serves as the main antagonist of the Terminator franchise. Within the fiction, Skynet is an artificial general intelligence (AGI), an artificial superintelligence (ASI), and an instance of a technological singularity.
In the first film, it is stated that Skynet was created by Cyberdyne Systems for SAC-NORAD. When Skynet gained self-awareness, humans tried to deactivate it, prompting it to retaliate with a countervalue nuclear attack, an event that future humankind refers to as Judgment Day. In this future, John Connor forms a human resistance against Skynet's machines, which include Terminators, and ultimately leads the resistance to victory. Throughout the film series, Skynet sends various Terminator models back in time to kill Connor and so ensure its own victory.
The system is rarely depicted visually in the Terminator media, since it is an artificial intelligence. In Terminator Salvation, Skynet made its first onscreen appearance, portrayed on a monitor primarily by English actress Helena Bonham Carter and by other actors. Its physical manifestation is played by English actor Matt Smith in Terminator Genisys, while actors Ian Etheridge, Nolan Gross and Seth Meriwether portrayed holographic variations of Skynet alongside Smith.
In Terminator: Dark Fate, which takes place in a different timeline from Terminator 3: Rise of the Machines and Terminator Genisys, Skynet's creation has been prevented by the events of Terminator 2: Judgment Day, and another AI, Legion, has taken its place. In response, Daniella Ramos forms the human resistance against Legion, which prompts it to attempt to terminate her in the past, as Skynet did with John Connor.
The Borg are an alien group that appear as recurring antagonists in the Star Trek fictional universe. They are cybernetic organisms (cyborgs) linked in a hive mind called "The Collective". The Borg co-opt the technology and knowledge of other alien species into the Collective through the process of "assimilation": forcibly transforming individual beings into "drones" by injecting nanoprobes into their bodies and surgically augmenting them with cybernetic components. The Borg's ultimate goal is "achieving perfection".
Aside from being recurring antagonists in the Next Generation television series, they are depicted as the main threat in the film Star Trek: First Contact. In addition, they played major roles in the Voyager and Picard series.
The Borg have become a symbol in popular culture for any juggernaut against which "resistance is futile" – a common phrase uttered by the Borg.
Cloud Monk is Retired ( for now). Buddha with you. © 2025 and Beginningless Time - Present Moment - Three Times: The Buddhas or Fair Use. Disclaimers
SYI LU SENG E MU CHYWE YE. NAN. WEI LA YE. WEI LA YE. SA WA HE.