Most companies struggle with the cost and latency that come with deploying AI. This article shows how to build a hybrid system that:

  • Handles 94.9% of requests on edge devices (sub-20 ms response times)
  • Cuts inference costs by 93.5% compared to cloud-only deployment
  • Retains 99.1% of the original model's accuracy through intelligent quantization
  • Keeps sensitive data local, which simplifies compliance

We'll walk through the complete implementation, with code, from domain adaptation to production monitoring.

The Real Problem Nobody Talks About

Picture this: you've built an excellent AI model for customer support. It works great in your Jupyter notebook. But when you deploy it to production, you discover:

  • Cloud inference costs $2,900/month at moderate traffic volumes
  • Response times hover around 200 ms (customers notice the delay)
  • Data crosses international borders (the compliance team is not happy)
  • Costs scale unpredictably with traffic spikes

Sound familiar? You're not alone. According to the Forbes Tech Council (2024), up to 85% of AI models never reach successful deployment, with cost and latency as the main barriers.
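
To see where numbers like these come from, here is a quick back-of-envelope sketch. The per-query prices are the illustrative figures used for the router later in this article, and the monthly volume is an assumption; plug in your own numbers.

# Rough cost comparison: cloud-only vs. a 95/5 hybrid split (illustrative figures)
CLOUD_COST_PER_QUERY = 0.0136   # USD per cloud inference (used again in Step 3)
EDGE_COST_PER_QUERY = 0.001     # USD per edge inference (used again in Step 3)
MONTHLY_QUERIES = 200_000       # assumed traffic volume for illustration

cloud_only = MONTHLY_QUERIES * CLOUD_COST_PER_QUERY
hybrid = MONTHLY_QUERIES * (0.95 * EDGE_COST_PER_QUERY + 0.05 * CLOUD_COST_PER_QUERY)

print(f"Cloud-only: ${cloud_only:,.0f}/month")       # ~$2,720/month, close to the figure above
print(f"Hybrid (95/5): ${hybrid:,.0f}/month")        # ~$326/month
print(f"Savings: {(1 - hybrid / cloud_only):.1%}")   # ~88% at an exact 95/5 split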

The Solution: Think Like Airport Security

What if, instead of sending every query to a massive cloud model, we could:

  • Handle 95% of routine requests locally (like the airport security fast lane)
  • Escalate only the complex cases to the cloud (secondary screening)
  • Keep a clear record of routing decisions (for audits)

This "edge-first" approach mirrors how people naturally handle support requests: experienced agents resolve most issues quickly and escalate only the tricky ones to specialists.

Edge and cloud exchange model updates and anonymized data in a Kubernetes-managed hybrid AI setup (image by author)
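
Stripped to its core, the routing decision is just a couple of comparisons. The sketch below uses the same thresholds as the router we build in Step 3, but it is a simplification; the real implementation also scores complexity and logs the reason for every decision.

def route(query: str, complexity: float, confidence: float) -> str:
    """Minimal sketch of the edge-first routing rule (thresholds from Step 3)."""
    if complexity <= 0.75 and confidence >= 0.90:
        return "edge"    # fast lane: handle locally, data never leaves the site
    return "cloud"       # secondary screening: escalate the tricky cases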

What We'll Build Together

By the end of this article, you'll have:

  1. A domain-adapted model that understands customer service language
  2. An 84% smaller quantized model that runs fast on a CPU
  3. An intelligent router that decides between edge and cloud for each query
  4. Production monitoring that keeps everything healthy

Let's start coding.

Environment Setup: Getting It Right from Day One

First, let's create a reproducible environment. Nothing kills momentum like spending a day debugging library conflicts.

import os
import warnings
import numpy as np
import pandas as pd
import torch
import tensorflow as tf
from transformers import (
    DistilBertTokenizerFast, DistilBertForMaskedLM, 
    Trainer, TrainingArguments, TFDistilBertForSequenceClassification
)
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
import onnxruntime as ort
import time
from collections import deque

def setup_reproducible_environment(seed=42):
    """Make results reproducible across runs"""
    np.random.seed(seed)
    torch.manual_seed(seed)
    tf.random.set_seed(seed)
    torch.backends.cudnn.deterministic = True
    tf.config.experimental.enable_op_determinism()
    warnings.filterwarnings('ignore')
    print(f"✅ Environment configured (seed: {seed})")

setup_reproducible_environment()

# Hardware specs for reproducibility
SYSTEM_CONFIG = {
    "cpu": "Intel Xeon Silver 4314 @ 2.4GHz",
    "memory": "64GB DDR4",
    "os": "Ubuntu 22.04",
    "python": "3.10.12",
    "key_libs": {
        "torch": "2.7.1",
        "tensorflow": "2.14.0", 
        "transformers": "4.52.4",
        "onnxruntime": "1.17.3"
    }
}

# Project structure
PATHS = {
    "data": "./data",
    "models": {
        "domain_adapted": "./models/dapt",
        "classifier": "./models/classifier",
        "onnx_fp32": "./models/onnx/model_fp32.onnx",
        "onnx_quantized": "./models/onnx/model_quantized.onnx"
    },
    "logs": "./logs"
}

# Create directories
for path in PATHS.values():
    if isinstance(path, dict):
        for p in path.values():
            os.makedirs(os.path.dirname(p) if '.' in os.path.basename(p) else p, exist_ok=True)
    else:
        os.makedirs(path, exist_ok=True)

print("📁 Project structure ready")

Step 1: Domain Adaptation – Teaching the AI to Speak "Support"

Regular language models know English, but they don't know support English. There's a big difference between "I need help" and "This is completely unacceptable, I demand to speak with a manager immediately!"

Domain-Adaptive Pre-Training (DAPT) lets the model keep learning the language of customer service conversations before you train it for classification.

class CustomerServiceTrainer:
    """Full pipeline for domain adaptation + classification"""

    def __init__(self, base_model="distilbert-base-uncased"):
        self.base_model = base_model
        self.tokenizer = DistilBertTokenizerFast.from_pretrained(base_model)
        print(f"🤖 Initialized with {base_model}")

    def domain_adaptation(self, texts, output_path, epochs=2, batch_size=32):
        """
        Phase 1: Adapt the model to customer service language patterns

        This is like language immersion: the model learns support-specific
        vocabulary, escalation phrases, and common interaction patterns.
        """
        from datasets import Dataset
        from transformers import DataCollatorForLanguageModeling

        print(f"📚 Starting domain adaptation on {len(texts):,} conversations...")

        # Create dataset for masked language modeling
        dataset = Dataset.from_dict({"text": texts}).map(
            lambda examples: self.tokenizer(
                examples["text"],
                padding="max_length",
                truncation=True,
                max_length=128  # Keep modest for memory
            ),
            batched=True,
            remove_columns=["text"]
        )

        # Initialize model for continued pre-training
        model = DistilBertForMaskedLM.from_pretrained(self.base_model)
        print(f"   📊 Model parameters: {model.num_parameters():,}")

        # Training setup
        training_args = TrainingArguments(
            output_dir=output_path,
            num_train_epochs=epochs,
            per_device_train_batch_size=batch_size,
            logging_steps=200,
            save_steps=1000,
            fp16=torch.cuda.is_available(),  # Use mixed precision if a GPU is available
        )

        trainer = Trainer(
            model=model,
            args=training_args,
            train_dataset=dataset,
            data_collator=DataCollatorForLanguageModeling(
                self.tokenizer, mlm=True, mlm_probability=0.15
            )
        )

        # Train and save
        trainer.train()
        trainer.save_model(output_path)
        self.tokenizer.save_pretrained(output_path)

        print(f"✅ Domain adaptation complete: {output_path}")
        return output_path
    
    def train_classifier(self, X_train, X_val, y_train, y_val,
                        dapt_model_path, output_path, epochs=8):
        """
        Phase 2: Two-stage classification training

        Stage 1: Warm up the classifier head (backbone frozen)
        Stage 2: Fine-tune the whole model with a smaller learning rate
        """
        from transformers import create_optimizer

        print(f"🎯 Training classifier on {len(X_train):,} samples...")

        # Encode labels
        self.label_encoder = LabelEncoder()
        y_train_enc = self.label_encoder.fit_transform(y_train)
        y_val_enc = self.label_encoder.transform(y_val)

        print(f"   📊 Classes: {list(self.label_encoder.classes_)}")

        # Create TensorFlow datasets
        def make_dataset(texts, labels, batch_size=128, shuffle=False):
            encodings = self.tokenizer(
                texts, padding="max_length", truncation=True,
                max_length=256, return_tensors="tf"  # Longer sequences for classification
            )
            dataset = tf.data.Dataset.from_tensor_slices((dict(encodings), labels))
            if shuffle:
                dataset = dataset.shuffle(10000, seed=42)
            return dataset.batch(batch_size).prefetch(tf.data.AUTOTUNE)

        train_dataset = make_dataset(X_train, y_train_enc, shuffle=True)
        val_dataset = make_dataset(X_val, y_val_enc)

        # Load the domain-adapted model for classification
        model = TFDistilBertForSequenceClassification.from_pretrained(
            dapt_model_path, num_labels=len(self.label_encoder.classes_)
        )

        # Optimizer with warmup
        total_steps = len(train_dataset) * epochs
        optimizer, _ = create_optimizer(
            init_lr=3e-5,
            num_train_steps=total_steps,
            num_warmup_steps=int(0.1 * total_steps)
        )

        model.compile(
            optimizer=optimizer,
            loss='sparse_categorical_crossentropy',
            metrics=['accuracy']
        )

        # Stage 1: Classifier head warm-up
        print("   🔥 Stage 1: Warming up classifier head...")
        model.distilbert.trainable = False
        model.fit(train_dataset, validation_data=val_dataset, epochs=1, verbose=1)

        # Stage 2: Full fine-tuning
        print("   🔥 Stage 2: Full model fine-tuning...")
        model.distilbert.trainable = True
        model.optimizer.learning_rate = 3e-6  # Smaller LR for stability

        # Add callbacks for better training
        callbacks = [
            tf.keras.callbacks.EarlyStopping(patience=2, restore_best_weights=True),
            tf.keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=1)
        ]

        history = model.fit(
            train_dataset,
            validation_data=val_dataset,
            epochs=epochs-1,  # Already did 1 epoch
            callbacks=callbacks,
            verbose=1
        )

        # Save everything
        model.save_pretrained(output_path)
        self.tokenizer.save_pretrained(output_path)

        import joblib
        joblib.dump(self.label_encoder, f"{output_path}/label_encoder.pkl")

        best_acc = max(history.history['val_accuracy'])
        print(f"✅ Training complete! Best accuracy: {best_acc:.4f}")

        return model, history

# Let's create some sample data for demonstration
def create_sample_data(n_samples=5000):
    """Generate realistic customer service data for the demo"""
    np.random.seed(42)

    # Sample conversation templates
    templates = {
        'positive': [
            "Thank you so much for the excellent customer service today!",
            "Great job resolving my issue quickly and professionally.",
            "I really appreciate the help with my account.",
            "The support team was fantastic and very knowledgeable.",
            "Great service, exactly what I needed."
        ],
        'negative': [
            "This is completely unacceptable and I demand to speak with a manager!",
            "I am extremely frustrated with the poor service quality.",
            "This issue has been ongoing for weeks without resolution.",
            "Terrible experience, worst customer service ever.",
            "I want a full refund immediately, this is ridiculous."
        ],
        'neutral': [
            "I need help with my account settings please.",
            "Can you check the status of my recent order?",
            "What are your business hours and contact information?",
            "I have a question about billing and payment options.",
            "Please help me understand the refund process."
        ]
    }

    data = []
    for _ in range(n_samples):
        sentiment = np.random.choice(['positive', 'negative', 'neutral'],
                                     p=[0.4, 0.3, 0.3])  # Realistic distribution
        template = np.random.choice(templates[sentiment])

        # Add some variation
        if np.random.random() < 0.2:  # 20% get account numbers
            template += f" My account number is {np.random.randint(100000, 999999)}."

        data.append({
            'transcript': template,
            'sentiment': sentiment
        })

    df = pd.DataFrame(data)
    print(f"📊 Created {len(df):,} sample conversations")
    print(f"📊 Sentiment distribution:\n{df['sentiment'].value_counts()}")
    return df

# Execute domain adaptation and classification training
trainer = CustomerServiceTrainer()

# Create sample data (replace with your actual data)
df = create_sample_data(5000)

# Split data
X_train, X_val, y_train, y_val = train_test_split(
    df['transcript'], df['sentiment'],
    test_size=0.2, stratify=df['sentiment'], random_state=42
)

# Run domain adaptation
dapt_path = trainer.domain_adaptation(
    df['transcript'].tolist(),
    PATHS["models"]["domain_adapted"],
    epochs=2
)

# Train the classifier
model, history = trainer.train_classifier(
    X_train.tolist(), X_val.tolist(),
    y_train.tolist(), y_val.tolist(),
    dapt_path,
    PATHS["models"]["classifier"],
    epochs=6
)

Step 2: Model Compression – the 84% Size Reduction

Now for the magic trick: we'll shrink our model by 84% while keeping almost all of its accuracy. This is what makes edge deployment feasible.

The key insight is that most neural networks carry more numerical precision than they need. They use 32-bit floating-point numbers where 8-bit integers work fine for most tasks. It's like using a high-resolution camera when a phone camera gives you the same result for social media.
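
To make that concrete, here is a minimal sketch of what symmetric INT8 weight quantization does to a single weight tensor. It illustrates the principle only; it is not the exact kernel ONNX Runtime applies, and the weight values are made up.

import numpy as np

# A tiny FP32 weight tensor (illustrative values)
w = np.array([0.42, -1.73, 0.05, 0.91], dtype=np.float32)

# Symmetric quantization: map the range [-max|w|, +max|w|] onto [-127, 127]
scale = np.abs(w).max() / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)   # stored as 1 byte each
w_restored = q.astype(np.float32) * scale                      # dequantized at inference

print(q)                             # e.g. [  31 -127    4   67]
print(np.abs(w - w_restored).max())  # small rounding error, below scale/2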

class ModelCompressor:
    """ONNX-based model compression with thorough validation"""

    def __init__(self, model_path):
        self.model_path = model_path
        self.tokenizer = DistilBertTokenizerFast.from_pretrained(model_path)
        print(f"🗜️ Compressor ready for {model_path}")

    def compress_to_onnx(self, fp32_output, quantized_output):
        """
        Two-step process:
        1. Convert the TensorFlow model to ONNX (cross-platform format)
        2. Apply dynamic INT8 quantization (no calibration needed)
        """
        from optimum.onnxruntime import ORTModelForSequenceClassification
        from onnxruntime.quantization import quantize_dynamic, QuantType

        print("📋 Step 1: Converting to ONNX format...")

        # Export to ONNX (this makes the model portable across platforms)
        ort_model = ORTModelForSequenceClassification.from_pretrained(
            self.model_path, export=True, provider="CPUExecutionProvider"
        )
        ort_model.save_pretrained(os.path.dirname(fp32_output))

        # Rename to our desired path
        generated_path = os.path.join(os.path.dirname(fp32_output), "model.onnx")
        if os.path.exists(generated_path):
            os.rename(generated_path, fp32_output)

        fp32_size = os.path.getsize(fp32_output) / (1024**2)  # MB
        print(f"   📏 Original ONNX model: {fp32_size:.2f}MB")

        print("⚡ Step 2: Applying dynamic INT8 quantization...")

        # Dynamic quantization - no calibration dataset needed!
        quantize_dynamic(
            model_input=fp32_output,
            model_output=quantized_output,
            weight_type=QuantType.QInt8
        )

        quantized_size = os.path.getsize(quantized_output) / (1024**2)  # MB
        compression_ratio = (fp32_size - quantized_size) / fp32_size * 100

        print(f"   📏 Quantized model: {quantized_size:.2f}MB")
        print(f"   🎯 Compression: {compression_ratio:.1f}% size reduction")

        return fp32_output, quantized_output, compression_ratio
    
    def benchmark_models(self, fp32_path, quantized_path, test_texts, test_labels):
        """
        Compare the FP32 and INT8 models on accuracy, speed, and size

        This is crucial - we need to verify that compression didn't break anything!
        """
        print("🧪 Benchmarking model performance...")

        results = {}

        for name, model_path in [("FP32 Original", fp32_path), ("INT8 Quantized", quantized_path)]:
            print(f"   Testing {name}...")

            # Load the model for inference
            session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])

            # Test on a representative sample (500 examples for speed)
            test_sample = min(500, len(test_texts))
            correct_predictions = 0
            latencies = []

            # Warm up the model (important for fair timing!)
            warmup_text = "Thank you for your help with my order today"
            warmup_encoding = self.tokenizer(
                warmup_text, padding="max_length", truncation=True,
                max_length=256, return_tensors="np"
            )

            for _ in range(10):  # 10 warmup runs
                _ = session.run(None, {
                    "input_ids": warmup_encoding["input_ids"],
                    "attention_mask": warmup_encoding["attention_mask"]
                })

            # Actual benchmarking
            for i in range(test_sample):
                text, true_label = test_texts[i], test_labels[i]

                encoding = self.tokenizer(
                    text, padding="max_length", truncation=True,
                    max_length=256, return_tensors="np"
                )

                # Time the inference
                start_time = time.perf_counter()
                outputs = session.run(None, {
                    "input_ids": encoding["input_ids"],
                    "attention_mask": encoding["attention_mask"]
                })
                latency_ms = (time.perf_counter() - start_time) * 1000
                latencies.append(latency_ms)

                # Check accuracy
                predicted_class = np.argmax(outputs[0])
                if predicted_class == true_label:
                    correct_predictions += 1

            # Calculate metrics
            accuracy = correct_predictions / test_sample
            mean_latency = np.mean(latencies)
            p95_latency = np.percentile(latencies, 95)
            model_size_mb = os.path.getsize(model_path) / (1024**2)

            results[name] = {
                "accuracy": accuracy,
                "mean_latency_ms": mean_latency,
                "p95_latency_ms": p95_latency,
                "model_size_mb": model_size_mb,
                "throughput_qps": 1000 / mean_latency  # Queries per second
            }

            print(f"      ✓ Accuracy: {accuracy:.4f}")
            print(f"      ✓ Mean latency: {mean_latency:.2f}ms")
            print(f"      ✓ P95 latency: {p95_latency:.2f}ms")
            print(f"      ✓ Model size: {model_size_mb:.2f}MB")
            print(f"      ✓ Throughput: {results[name]['throughput_qps']:.1f} QPS")

        # Show the comparison
        if len(results) == 2:
            fp32_results = results["FP32 Original"]
            int8_results = results["INT8 Quantized"]

            size_reduction = (1 - int8_results["model_size_mb"] / fp32_results["model_size_mb"]) * 100
            accuracy_retention = int8_results["accuracy"] / fp32_results["accuracy"]
            latency_change = ((int8_results["mean_latency_ms"] - fp32_results["mean_latency_ms"])
                             / fp32_results["mean_latency_ms"]) * 100

            print(f"\n🎯 Quantization Impact Summary:")
            print(f"   📦 Size reduction: {size_reduction:.1f}%")
            print(f"   🎯 Accuracy retention: {accuracy_retention:.1%}")
            print(f"   ⚡ Latency change: {latency_change:+.1f}%")
            print(f"   💾 Memory saved: {fp32_results['model_size_mb'] - int8_results['model_size_mb']:.1f}MB")

        return results

# Execute model compression
compressor = ModelCompressor(PATHS["models"]["classifier"])

# Compress the model
fp32_path, quantized_path, compression_ratio = compressor.compress_to_onnx(
    PATHS["models"]["onnx_fp32"],
    PATHS["models"]["onnx_quantized"]
)

# Load the test data and label encoder for benchmarking
import joblib
label_encoder = joblib.load(f"{PATHS['models']['classifier']}/label_encoder.pkl")
test_labels_encoded = label_encoder.transform(y_val[:500])

# Benchmark the models
benchmark_results = compressor.benchmark_models(
    fp32_path, quantized_path,
    X_val[:500].tolist(), test_labels_encoded
)

Step 3: The Smart Router – Deciding Between Edge and Cloud

This is where the hybrid magic happens. Our router analyzes each customer query and decides whether to handle it locally (at the edge) or forward it to the cloud. Think of it as an intelligent traffic controller.

The router considers five factors:

  1. Text length – longer questions often indicate complex issues
  2. Sentence structure – multiple clauses suggest nuanced problems
  3. Emotional indicators – words like "frustrated" signal escalation needs
  4. Model confidence – if the AI isn't sure, route to the cloud
  5. Escalation keywords – "manager", "complaint", and so on (a hand-worked scoring example follows this list)
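
Here is a hand-worked sketch of how those factors combine, using the same weights and normalizations as the `analyze_complexity` method below. The message statistics and the confidence value are invented for illustration.

# Hypothetical scores for an angry, 240-character, two-sentence message
length_score      = min(240 / 200, 1.0)    # 1.00 (longer than a typical message)
syntax_score      = min(18 / 15, 1.0)      # 1.00 (18 words per sentence on average)
uncertainty_score = 1 - abs(2 * 0.62 - 1)  # 0.76 (model confidence of only 0.62)
emotion_score     = min(2 / 3, 1.0)        # 0.67 (two escalation keywords matched)

complexity = (0.3 * length_score + 0.3 * syntax_score
              + 0.2 * uncertainty_score + 0.2 * emotion_score)
print(round(complexity, 2))  # 0.89 -> above the 0.75 threshold, so route to cloud
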
class IntelligentRouter:
    """
    Smart routing system that maximizes edge utilization while maintaining quality

    The core insight: 95% of customer queries are routine and can be handled
    by a small, fast model. The remaining 5% need the full power of the cloud.
    """

    def __init__(self, edge_model_path, cloud_model_path, tokenizer_path):
        # Load both models
        self.edge_session = ort.InferenceSession(
            edge_model_path, providers=["CPUExecutionProvider"]
        )
        self.cloud_session = ort.InferenceSession(
            cloud_model_path, providers=["CPUExecutionProvider"]  # Could also use a GPU provider
        )

        # Load tokenizer and label encoder
        self.tokenizer = DistilBertTokenizerFast.from_pretrained(tokenizer_path)
        import joblib
        self.label_encoder = joblib.load(f"{tokenizer_path}/label_encoder.pkl")

        # Routing configuration (tuned through experimentation)
        self.complexity_threshold = 0.75    # Route to cloud if complexity > 0.75
        self.confidence_threshold = 0.90    # Route to cloud if confidence < 0.90
        self.edge_preference = 0.95         # 95% preference for edge when possible

        # Cost tracking (realistic cloud pricing)
        self.costs = {
            "edge": 0.001,   # $0.001 per inference on edge
            "cloud": 0.0136  # $0.0136 per inference on cloud (OpenAI-like pricing)
        }

        # Performance metrics
        self.metrics = {
            "total_requests": 0,
            "edge_requests": 0,
            "cloud_requests": 0,
            "total_cost": 0.0,
            "routing_reasons": {}
        }

        print("🧠 Smart router initialized")
        print(f" Complexity threshold: {self.complexity_threshold}")
        print(f" Confidence threshold: {self.confidence_threshold}")
        print(f" Cloud/edge cost ratio: {self.costs['cloud']/self.costs['edge']:.1f}x")
    
    def analyze_complexity(self, text, model_confidence):
        """
        Multi-dimensional complexity analysis

        This is the heart of our routing logic. We look at several signals
        to determine whether a query needs the full power of the cloud model.
        """

        # Factor 1: Length complexity (normalized by typical customer messages)
        # Longer messages often indicate more complex issues
        length_score = min(len(text) / 200, 1.0)  # 200 chars = typical message

        # Factor 2: Syntactic complexity (sentence structure)
        sentences = [s.strip() for s in text.split('.') if s.strip()]
        words = text.split()

        if sentences and words:
            avg_sentence_length = len(words) / len(sentences)
            syntax_score = min(avg_sentence_length / 15, 1.0)  # 15 words = average
        else:
            syntax_score = 0.0

        # Factor 3: Model uncertainty (inverse of confidence)
        # If the model isn't confident, it's probably a complex case
        uncertainty_score = 1 - abs(2 * model_confidence - 1)

        # Factor 4: Escalation/emotional keywords
        escalation_keywords = [
            'frustrated', 'angry', 'unacceptable', 'manager', 'supervisor',
            'complaint', 'terrible', 'awful', 'disgusted', 'furious'
        ]

        keyword_matches = sum(1 for word in escalation_keywords if word in text.lower())
        emotion_score = min(keyword_matches / 3, 1.0)  # Normalize to 0-1

        # Weighted combination (weights tuned through experimentation)
        complexity = (
            0.3 * length_score +      # Length matters most
            0.3 * syntax_score +      # Structure is important
            0.2 * uncertainty_score + # Model confidence
            0.2 * emotion_score       # Emotional indicators
        )

        return complexity, {
            'length': length_score,
            'syntax': syntax_score,
            'uncertainty': uncertainty_score,
            'emotion': emotion_score,
            'keyword_matches': keyword_matches
        }
    
    def route_queries(self, queries):
        """
        Main routing pipeline

        1. Get initial predictions from the cloud model (for confidence scores)
        2. Analyze the complexity of each query
        3. Route simple queries to the edge, keep complex ones on the cloud
        4. Return results with routing decisions logged
        """
        print(f" Routing {len(queries)} customer queries...")

        # Step 1: Get cloud predictions for complexity analysis
        cloud_predictions = self._run_inference(self.cloud_session, queries, "cloud")

        # Step 2: Analyze each query and make routing decisions
        edge_queries = []
        edge_indices = []
        routing_decisions = []

        for i, (query, cloud_result) in enumerate(zip(queries, cloud_predictions)):
            if "error" in cloud_result:
                # If the cloud failed, force to edge as a fallback
                decision = {
                    "route": "edge",
                    "reason": "cloud_error",
                    "complexity": 0.0,
                    "confidence": 0.0
                }
                edge_queries.append(query)
                edge_indices.append(i)
            else:
                # Analyze complexity
                complexity, breakdown = self.analyze_complexity(
                    query, cloud_result["confidence"]
                )

                # Make the routing decision
                should_use_edge = (
                    complexity <= self.complexity_threshold and
                    cloud_result["confidence"] >= self.confidence_threshold and
                    np.random.random() < self.edge_preference
                )

                # Determine the reason for the routing decision
                if should_use_edge:
                    reason = "optimal_edge"
                    edge_queries.append(query)
                    edge_indices.append(i)
                else:
                    if complexity > self.complexity_threshold:
                        reason = "high_complexity"
                    elif cloud_result["confidence"] < self.confidence_threshold:
                        reason = "low_confidence"
                    else:
                        reason = "random_cloud"

                decision = {
                    "route": "edge" if should_use_edge else "cloud",
                    "reason": reason,
                    "complexity": complexity,
                    "confidence": cloud_result["confidence"],
                    "breakdown": breakdown
                }

            routing_decisions.append(decision)

        # Step 3: Run edge inference for the selected queries
        if edge_queries:
            edge_results = self._run_inference(self.edge_session, edge_queries, "edge")

            # Replace cloud results with edge results for routed queries
            for idx, edge_result in zip(edge_indices, edge_results):
                cloud_predictions[idx] = edge_result

        # Step 4: Add routing metadata and costs
        for i, (result, decision) in enumerate(zip(cloud_predictions, routing_decisions)):
            result.update(decision)
            result["cost"] = self.costs[decision["route"]]

        # Step 5: Update metrics
        edge_count = len(edge_queries)
        cloud_count = len(queries) - edge_count

        self.metrics["total_requests"] += len(queries)
        self.metrics["edge_requests"] += edge_count
        self.metrics["cloud_requests"] += cloud_count

        batch_cost = edge_count * self.costs["edge"] + cloud_count * self.costs["cloud"]
        self.metrics["total_cost"] += batch_cost

        # Track routing reasons
        for decision in routing_decisions:
            reason = decision["reason"]
            self.metrics["routing_reasons"][reason] = (
                self.metrics["routing_reasons"].get(reason, 0) + 1
            )

        print(f" Routed: {edge_count} edge, {cloud_count} cloud")
        print(f" Batch cost: ${batch_cost:.4f}")
        print(f" Edge utilization: {edge_count/len(queries):.1%}")

        return cloud_predictions, {
            "total_queries": len(queries),
            "edge_utilization": edge_count / len(queries),
            "batch_cost": batch_cost,
            "avg_complexity": np.mean([d["complexity"] for d in routing_decisions])
        }
    
    def _run_inference(self, session, texts, source):
        """Run batch inference with error handling"""
        try:
            # Tokenize all texts
            encodings = self.tokenizer(
                texts, padding="max_length", truncation=True,
                max_length=256, return_tensors="np"
            )

            # Run inference
            outputs = session.run(None, {
                "input_ids": encodings["input_ids"],
                "attention_mask": encodings["attention_mask"]
            })

            # Process results
            results = []
            for i, logits in enumerate(outputs[0]):
                predicted_class = int(np.argmax(logits))
                confidence = float(np.max(self._softmax(logits)))
                predicted_sentiment = self.label_encoder.inverse_transform([predicted_class])[0]

                results.append({
                    "text": texts[i],
                    "predicted_class": predicted_class,
                    "predicted_sentiment": predicted_sentiment,
                    "confidence": confidence,
                    "processing_location": source
                })

            return results

        except Exception as e:
            # Return error results
            return [{"text": text, "error": str(e), "processing_location": source}
                    for text in texts]

    def _softmax(self, x):
        """Convert logits to probabilities"""
        exp_x = np.exp(x - np.max(x))
        return exp_x / np.sum(exp_x)
    
    def get_system_stats(self):
        """Get comprehensive system statistics"""
        if self.metrics["total_requests"] == 0:
            return {"error": "No requests processed"}

        # Calculate cost savings vs. cloud-only
        cloud_only_cost = self.metrics["total_requests"] * self.costs["cloud"]
        actual_cost = self.metrics["total_cost"]
        savings_percent = (cloud_only_cost - actual_cost) / cloud_only_cost * 100

        return {
            "total_queries_processed": self.metrics["total_requests"],
            "edge_utilization": self.metrics["edge_requests"] / self.metrics["total_requests"],
            "cloud_utilization": self.metrics["cloud_requests"] / self.metrics["total_requests"],
            "total_cost": self.metrics["total_cost"],
            "cost_per_query": self.metrics["total_cost"] / self.metrics["total_requests"],
            "cost_savings_percent": savings_percent,
            "routing_reasons": dict(self.metrics["routing_reasons"]),
            "estimated_monthly_savings": (cloud_only_cost - actual_cost) * 30
        }

# Initialize the router
router = IntelligentRouter(
    edge_model_path=PATHS["models"]["onnx_quantized"],
    cloud_model_path=PATHS["models"]["onnx_fp32"],
    tokenizer_path=PATHS["models"]["classifier"]
)

# Test with realistic customer queries
test_queries = [
    "Thank you so much for the excellent customer service today!",
    "I am extremely frustrated with this ongoing billing issue that has been happening for three months despite multiple calls to your support team who seem completely unable to resolve these complex account synchronization problems.",
    "Can you please help me check my order status?",
    "What is your return policy for defective products?",
    "This is completely unacceptable and I demand to speak with a manager immediately about these billing errors!",
    "My account number is 123456789 and I need help with the upgrade process.",
    "Hello, I have a quick question about my recent purchase.",
    "The technical support team was unable to resolve my connectivity issue and I need escalation to a senior specialist who can handle enterprise network configuration problems."
]

# Route the queries
results, batch_metrics = router.route_queries(test_queries)

# Display detailed results
print("\n DETAILED ROUTING ANALYSIS:")
for i, (query, result) in enumerate(zip(test_queries, results)):
    route = result.get("processing_location", "unknown").upper()
    sentiment = result.get("predicted_sentiment", "unknown")
    confidence = result.get("confidence", 0)
    complexity = result.get("complexity", 0)
    reason = result.get("reason", "unknown")
    cost = result.get("cost", 0)

    print(f"\nQuery {i+1}: \"{query[:60]}...\"")
    print(f"   Route: {route} (reason: {reason})")
    print(f"   Sentiment: {sentiment} (confidence: {confidence:.3f})")
    print(f"   Complexity: {complexity:.3f}")
    print(f"   Cost: ${cost:.6f}")

# Show system-wide performance
system_stats = router.get_system_stats()
print("\n SYSTEM PERFORMANCE SUMMARY:")
print(f"   Total queries: {system_stats['total_queries_processed']}")
print(f"   Edge utilization: {system_stats['edge_utilization']:.1%}")
print(f"   Cost per query: ${system_stats['cost_per_query']:.6f}")
print(f"   Cost savings: {system_stats['cost_savings_percent']:.1f}%")
print(f"   Monthly savings estimate: ${system_stats['estimated_monthly_savings']:.2f}")

Step 4: Production Monitoring – Keeping It Healthy

A system without monitoring is a system waiting to fail. Our monitoring setup is lightweight but effective at catching the problems that matter most: accuracy drops, cost spikes, and routing issues.

class ProductionMonitor:
    """
    Lightweight production monitoring for hybrid AI systems

    Tracks the metrics that actually matter for business outcomes:
    - Edge utilization (cost impact)
    - Accuracy trends (quality impact)
    - Latency distribution (user experience impact)
    - Cost per query (budget impact)
    """

    def __init__(self, alert_thresholds=None):
        # Set sensible defaults for alerts
        self.thresholds = alert_thresholds or {
            "min_edge_utilization": 0.80,  # Alert if < 80% edge utilization
            "min_accuracy": 0.85,          # Alert if accuracy drops below 85%
            "max_cost_per_query": 0.01,    # Alert if cost > $0.01 per query
            "max_p95_latency": 150         # Alert if P95 latency > 150ms
        }

        # Efficient storage with ring buffers (memory-bounded)
        self.metrics_history = deque(maxlen=10000)  # ~1 week at 1 batch/minute
        self.alerts = []

        print(" Production monitoring initialized")
        print(f"   Thresholds: {self.thresholds}")
    
    def log_batch(self, batch_metrics, accuracy=None, latencies=None):
        """
        Record batch performance and check for issues

        This gets called after every batch of queries is processed.
        """
        timestamp = time.time()

        # Create a performance record
        record = {
            "timestamp": timestamp,
            "edge_utilization": batch_metrics["edge_utilization"],
            "total_cost": batch_metrics["batch_cost"],
            "avg_complexity": batch_metrics.get("avg_complexity", 0),
            "query_count": batch_metrics["total_queries"],
            "accuracy": accuracy
        }

        # Add latency stats if provided
        if latencies:
            record.update({
                "mean_latency": np.mean(latencies),
                "p95_latency": np.percentile(latencies, 95),
                "p99_latency": np.percentile(latencies, 99)
            })

        self.metrics_history.append(record)

        # Check for alerts
        alerts = self._check_alerts(record)
        self.alerts.extend(alerts)

        if alerts:
            for alert in alerts:
                print(f" ALERT: {alert}")
    
    def _check_alerts(self, record):
        """Check current metrics against the thresholds"""
        alerts = []

        # Edge utilization alert
        if record["edge_utilization"] < self.thresholds["min_edge_utilization"]:
            alerts.append(
                f"Low edge utilization: {record['edge_utilization']:.1%} "
                f"< {self.thresholds['min_edge_utilization']:.1%}"
            )

        # Accuracy alert
        if record.get("accuracy") and record["accuracy"] < self.thresholds["min_accuracy"]:
            alerts.append(
                f"Low accuracy: {record['accuracy']:.3f} "
                f"< {self.thresholds['min_accuracy']:.3f}"
            )

        # Cost alert
        cost_per_query = record["total_cost"] / record["query_count"]
        if cost_per_query > self.thresholds["max_cost_per_query"]:
            alerts.append(
                f"High cost per query: ${cost_per_query:.4f} "
                f"> ${self.thresholds['max_cost_per_query']:.4f}"
            )

        # Latency alert
        if record.get("p95_latency") and record["p95_latency"] > self.thresholds["max_p95_latency"]:
            alerts.append(
                f"High P95 latency: {record['p95_latency']:.1f}ms "
                f"> {self.thresholds['max_p95_latency']}ms"
            )

        return alerts
    
    def generate_health_report(self):
        """Generate a comprehensive system health report"""
        if not self.metrics_history:
            return {"status": "No data available"}

        # Analyze recent performance (last 100 batches or 24 hours)
        now = time.time()
        recent_cutoff = now - (24 * 3600)  # 24 hours ago

        recent_records = [
            r for r in self.metrics_history
            if r["timestamp"] > recent_cutoff
        ]

        if not recent_records:
            recent_records = list(self.metrics_history)[-100:]  # Last 100 batches

        # Calculate key metrics
        total_queries = sum(r["query_count"] for r in recent_records)
        total_cost = sum(r["total_cost"] for r in recent_records)

        # Performance averages
        avg_metrics = {
            "edge_utilization": np.mean([r["edge_utilization"] for r in recent_records]),
            "cost_per_query": total_cost / total_queries if total_queries > 0 else 0,
            "avg_complexity": np.mean([r.get("avg_complexity", 0) for r in recent_records])
        }

        # Accuracy analysis (if available)
        accuracy_records = [r["accuracy"] for r in recent_records if r.get("accuracy")]
        if accuracy_records:
            avg_metrics.update({
                "current_accuracy": accuracy_records[-1],
                "avg_accuracy": np.mean(accuracy_records),
                "accuracy_trend": self._calculate_trend(accuracy_records[-10:])
            })

        # Latency analysis (if available)
        latency_records = [r.get("p95_latency") for r in recent_records if r.get("p95_latency")]
        if latency_records:
            avg_metrics.update({
                "current_p95_latency": latency_records[-1],
                "avg_p95_latency": np.mean(latency_records),
                "latency_trend": self._calculate_trend(latency_records[-10:])
            })

        # Recent alerts
        recent_alert_count = len(self.alerts) if self.alerts else 0

        # Overall health assessment
        health_score = self._calculate_health_score(avg_metrics, recent_alert_count)

        return {
            "timestamp": now,
            "period_analyzed": f"{len(recent_records)} batches ({total_queries:,} queries)",
            "health_score": health_score,
            "health_status": self._get_health_status(health_score),
            "performance_metrics": avg_metrics,
            "recent_alerts": recent_alert_count,
            "recommendations": self._generate_recommendations(avg_metrics, recent_alert_count),
            "cost_analysis": {
                "total_cost_analyzed": total_cost,
                "daily_cost_estimate": total_cost * (86400 / (24 * 3600)),  # Scale to daily
                "monthly_cost_estimate": total_cost * (86400 * 30 / (24 * 3600))
            }
        }
    
    def _calculate_trend(self, values, min_samples=3):
        """Determine whether a metric is improving, stable, or declining"""
        if len(values) < min_samples:
            return "insufficient_data"

        # Simple linear regression slope
        x = np.arange(len(values))
        slope = np.polyfit(x, values, 1)[0]

        # Determine significance
        std_dev = np.std(values)
        threshold = std_dev * 0.1  # 10% of std dev

        if abs(slope) < threshold:
            return "stable"
        elif slope > 0:
            return "improving"
        else:
            return "declining"
    
    def _calculate_health_score(self, metrics, alert_count):
        """Calculate overall system health (0-100)"""
        score = 100

        # Penalize based on metrics
        if metrics["edge_utilization"] < 0.9:
            score -= 10  # Edge utilization penalty
        if metrics["edge_utilization"] < 0.8:
            score -= 20  # Severe edge utilization penalty

        if metrics.get("current_accuracy", 1.0) < 0.9:
            score -= 15  # Accuracy penalty
        if metrics.get("current_accuracy", 1.0) < 0.8:
            score -= 30  # Severe accuracy penalty

        # Alert penalty
        score -= min(alert_count * 5, 30)  # Max 30 point penalty for alerts

        return max(0, score)
    
    def _get_health_status(self, score):
        """Convert the numeric health score to a status"""
        if score >= 90:
            return "excellent"
        elif score >= 75:
            return "good"
        elif score >= 60:
            return "fair"
        elif score >= 40:
            return "poor"
        else:
            return "critical"
    
    def _generate_recommendations(self, metrics, alert_count):
        """Generate actionable recommendations"""
        recommendations = []

        if metrics["edge_utilization"] < 0.8:
            recommendations.append(
                f"Low edge utilization ({metrics['edge_utilization']:.1%}): "
                "Consider raising the complexity threshold or lowering the confidence threshold"
            )

        if metrics.get("current_accuracy", 1.0) < 0.85:
            recommendations.append(
                f"Low accuracy ({metrics.get('current_accuracy', 0):.3f}): "
                "Review model performance and consider retraining"
            )

        if metrics["cost_per_query"] > 0.005:  # > $0.005 per query
            recommendations.append(
                f"High cost per query (${metrics['cost_per_query']:.4f}): "
                "Increase edge utilization to reduce costs"
            )

        if alert_count > 5:
            recommendations.append(
                f"High alert volume ({alert_count}): "
                "Review alert thresholds and address underlying issues"
            )

        if not recommendations:
            recommendations.append("System operating within normal parameters")

        return recommendations

# Initialize monitoring
monitor = ProductionMonitor()

# Log our batch performance
monitor.log_batch(batch_metrics)

# Generate the health report
health_report = monitor.generate_health_report()

print("\n SYSTEM HEALTH REPORT:")
print(f" Health Status: {health_report['health_status'].upper()} ({health_report['health_score']}/100)")
print(f" Period: {health_report['period_analyzed']}")

print("\n Key Metrics:")
for metric, value in health_report['performance_metrics'].items():
    if isinstance(value, float):
        if 'utilization' in metric:
            print(f"   {metric}: {value:.1%}")
        elif 'cost' in metric:
            print(f"   {metric}: ${value:.4f}")
        else:
            print(f"   {metric}: {value:.3f}")
    else:
        print(f"   {metric}: {value}")

print("\n Cost Analysis:")
for metric, value in health_report['cost_analysis'].items():
    print(f"   {metric}: ${value:.4f}")

print("\n Recommendations:")
for i, rec in enumerate(health_report['recommendations'], 1):
    print(f"   {i}. {rec}")

What We've Built: A Production-Ready System

Let's step back and appreciate what we've accomplished:

  1. A domain-adapted model that understands customer service language
  2. An 84% smaller quantized model that runs on standard CPU hardware
  3. A smart router that handles 95% of queries locally
  4. Production monitoring that catches problems before they affect users

Here's how the numbers look in practice:

# Let's summarize our system's performance
print("🎯 HYBRID EDGE-CLOUD AI SYSTEM PERFORMANCE")
print("=" * 50)

# Model compression results
fp32_size = benchmark_results["FP32 Original"]["model_size_mb"]
int8_size = benchmark_results["INT8 Quantized"]["model_size_mb"]
compression_ratio = (1 - int8_size/fp32_size) * 100

print(f" Model Compression:")
print(f"   Original size: {fp32_size:.1f}MB")
print(f"   Quantized size: {int8_size:.1f}MB")
print(f"   Compression: {compression_ratio:.1f}%")

# Accuracy retention
fp32_acc = benchmark_results["FP32 Original"]["accuracy"]
int8_acc = benchmark_results["INT8 Quantized"]["accuracy"]
accuracy_retention = int8_acc / fp32_acc * 100

print(f"\n Accuracy:")
print(f"   Original accuracy: {fp32_acc:.3f}")
print(f"   Quantized accuracy: {int8_acc:.3f}")
print(f"   Retention: {accuracy_retention:.1f}%")

# Performance metrics
fp32_latency = benchmark_results["FP32 Original"]["mean_latency_ms"]
int8_latency = benchmark_results["INT8 Quantized"]["mean_latency_ms"]

print(f"\n Performance:")
print(f"   FP32 mean latency: {fp32_latency:.1f}ms")
print(f"   INT8 mean latency: {int8_latency:.1f}ms")
print(f"   FP32 P95 latency: {benchmark_results['FP32 Original']['p95_latency_ms']:.1f}ms")
print(f"   INT8 P95 latency: {benchmark_results['INT8 Quantized']['p95_latency_ms']:.1f}ms")

# Routing and cost metrics
system_stats = router.get_system_stats()
print(f"\n Routing Efficiency:")
print(f"   Edge utilization: {system_stats['edge_utilization']:.1%}")
print(f"   Cost savings: {system_stats['cost_savings_percent']:.1f}%")
print(f"   Cost per query: ${system_stats['cost_per_query']:.6f}")

# System health
print(f"\n System Health:")
print(f"   Status: {health_report['health_status'].upper()}")
print(f"   Score: {health_report['health_score']}/100")
print(f"   Recent alerts: {health_report['recent_alerts']}")

print("\n" + "=" * 50)

Key Takeaways and Next Steps

We've built something practical: a hybrid AI system that delivers cloud-quality results at edge-level cost and latency. Here's why it works:

The 95/5 rule: most customer queries are routine. A well-tuned small model can handle them just fine, leaving only the genuinely complex cases for the cloud.

Compression without compromise: dynamic INT8 quantization achieves an 84% size reduction with minimal accuracy loss, and it removes the need for calibration datasets.

Intelligent routing: our multi-dimensional complexity analysis makes sure queries go to the right place for the right reasons.

Production monitoring: simple alerts on the metrics that matter keep the system healthy in production.

Where to Go from Here

Start small: deploy to a subset of your traffic first. Validate that the results match your expectations before scaling up.

Tune gradually: adjust the routing thresholds weekly based on your own quality-versus-cost trade-offs (a small tuning sketch follows these tips).

Scale thoughtfully: add more edge nodes as traffic grows. The architecture scales horizontally.

Keep learning: monitor routing decisions and accuracy trends. The data will guide your next optimizations.
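
As a starting point for the "tune gradually" step, here is a minimal sketch of a weekly threshold adjustment driven by the stats the router already collects. The step size, floors, and targets are assumptions you would adapt to your own quality-versus-cost trade-off.

def adjust_thresholds(router, weekly_accuracy, target_edge=0.95, min_accuracy=0.90, step=0.02):
    """Nudge routing thresholds once a week (illustrative heuristic, not part of the pipeline above)."""
    stats = router.get_system_stats()
    if weekly_accuracy < min_accuracy:
        # Quality is slipping: send more traffic to the cloud
        router.complexity_threshold = max(0.50, router.complexity_threshold - step)
        router.confidence_threshold = min(0.99, router.confidence_threshold + step)
    elif stats["edge_utilization"] < target_edge:
        # Quality is fine but the edge is under-used: loosen the gates
        router.complexity_threshold = min(0.95, router.complexity_threshold + step)
        router.confidence_threshold = max(0.80, router.confidence_threshold - step)
    return router.complexity_threshold, router.confidence_threshold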

The Bigger Picture

This isn't just about contact centers or customer service. The same pattern works anywhere you have:

  • High volumes of routine requests mixed with occasional complex cases
  • Cost sensitivity and latency requirements
  • Compliance or data sovereignty concerns

Think about your own AI applications. How many of them are genuinely complex rather than routine? We'd bet most follow the 95/5 rule, which makes them perfect candidates for this hybrid approach.

The future of AI isn't about bigger models; it's about smarter architectures: systems that do more with less, keep data where it belongs, and cost what you can actually afford to pay.

Ready to try it yourself? The complete code is available in this article. Start with your own data, follow the setup instructions, and see what your 95/5 split looks like.

*All images, unless otherwise noted, are by the author.

References and Resources

  • Research paper: "Comparative Analysis of Edge vs. Cloud Contact Center Deployments: A Technical and Architectural Perspective", IEEE ICECCE 2025
  • Complete notebook: all of the code from this article is available as a reproducible Jupyter notebook
  • Environment specifications: Intel Xeon Silver 4314, 64 GB RAM, Ubuntu 22.04, Python 3.10

The system described here represents independent research and is not affiliated with any employer or commercial entity. Results will vary with hardware, data characteristics, and domain-specific factors.

Want to discuss the implementation details or share your own results? Feel free to connect with me in the comments below.
