Deploying JAIS AI: Docker vs Native Performance Analysis with Python Implementation

Building a high-performance Arabic-English AI deployment solution with benchmarking


The JAIS (Jebel Jais) AI model represents a breakthrough in bilingual Arabic-English language processing, developed by Inception AI, MBZUAI, and Cerebras Systems. This post details the implementation of a production-ready deployment solution with comprehensive performance analysis comparing Docker containerization versus native Metal GPU acceleration.

In this project, I used the jais-family-30b-16k-chat-i1-GGUF model provided by mradermacher, a recognized quantization specialist in the community. The mradermacher quantized version was chosen because:

  • iMatrix Quantization: Advanced i1-Q4_K_M provides superior quality vs static quantization. Research shows that weighted/imatrix quants offer significantly better model quality than classical static quants at the same quantization level
  • GGUF Format: Optimized for llama.cpp inference with Metal GPU acceleration
  • Balanced Performance: Q4_K_M offers the ideal speed/quality/size ratio (25.97 GiB)
  • Production Ready: Pre-quantized and extensively tested for deployment
  • Community Trusted: mradermacher is known for creating high-quality quantizations with automated processes and extensive testing
  • Superior Multilingual Performance: Studies indicate that English imatrix datasets show better results even for non-English inference, as most base models are primarily trained on English

Solution Architecture

The deployment solution consists of several key components designed for maximum flexibility and performance:

Project Structure

jais-ai-docker/
├── run.sh                      # Main server launcher
├── test.sh                     # Comprehensive test suite  
├── build.sh                    # Build system (Docker/Native)
├── cleanup.sh                  # Project cleanup utilities
├── Dockerfile                  # ARM64 optimized container
├── src/
│   ├── app.py                  # Flask API server
│   ├── model_loader.py         # GGUF model loader with auto-detection
│   └── requirements.txt        # Python dependencies
├── config/
│   └── performance_config.json # Performance presets
└── models/
    └── jais-family-30b-16k-chat.i1-Q4_K_M.gguf  # Quantized model

Python Implementation Overview

Flask API Server

The core server implements a robust Flask application with proper error handling and environment detection:

# Excerpt from src/app.py
import os
import time

from flask import Flask, request, jsonify

app = Flask(__name__)

# Configuration with environment variable support
MODEL_PATH = os.environ.get("MODEL_PATH", "/app/models/jais-family-30b-16k-chat.i1-Q4_K_M.gguf")
CONFIG_PATH = os.environ.get("CONFIG_PATH", "/app/config/performance_config.json")

@app.route('/chat', methods=['POST'])
def chat():
    """Main chat endpoint with comprehensive error handling."""
    if not model_loaded:
        return jsonify({"error": "Model not loaded"}), 503
    
    try:
        data = request.json
        message = data.get('message', '')
        max_tokens = data.get('max_tokens', 100)
        
        # Generate response with timing
        start_time = time.time()
        response_data = jais_loader.generate_response(message, max_tokens=max_tokens)
        generation_time = time.time() - start_time
        
        # Add performance metrics
        response_data['generation_time_seconds'] = round(generation_time, 3)
        response_data['model_load_time_seconds'] = round(model_load_time, 3)
        
        return jsonify(response_data)
        
    except Exception as e:
        logger.error(f"Error in chat endpoint: {e}")
        return jsonify({"error": str(e)}), 500

Key Features:

  • Environment Variable Configuration: Flexible path configuration for different deployment modes
  • Performance Metrics: Built-in timing for load time and generation speed
  • Error Handling: Comprehensive exception handling with proper HTTP status codes
  • Health Checks: Monitoring endpoint for deployment orchestration
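
The health-check endpoint itself is not shown in the excerpt above; a minimal sketch of what it might look like (route and field names are assumptions here — the actual implementation lives in src/app.py):

```python
from flask import Flask, jsonify

app = Flask(__name__)
model_loaded = True  # in the real server this flag is set once the GGUF model finishes loading

@app.route('/health', methods=['GET'])
def health():
    """Liveness/readiness probe for deployment orchestration (e.g. Docker HEALTHCHECK)."""
    if not model_loaded:
        return jsonify({"status": "loading"}), 503
    return jsonify({"status": "ok"}), 200
```

Returning 503 while the model is still loading lets an orchestrator delay routing traffic until the server is actually ready.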

Complete Flask implementation: src/app.py

Smart Model Loader

The model loader implements intelligent environment detection and optimal configuration:

class JaisModelLoader:
    """
    Optimized model loader for mradermacher Jais AI GGUF models with proper error handling
    and resource management.
    """
    
    def _detect_runtime_environment(self) -> str:
        """Auto-detect the runtime environment and return optimal performance mode."""
        # Check if running in a Docker container: /.dockerenv is Docker-specific,
        # while /proc/1/cgroup exists on every Linux system, so inspect its contents
        if os.path.exists('/.dockerenv'):
            return 'docker'
        try:
            with open('/proc/1/cgroup') as f:
                if 'docker' in f.read():
                    return 'docker'
        except OSError:
            pass
        
        # Check if running natively on macOS with GGML_METAL environment variable
        if (platform.system() == 'Darwin' and 
            platform.machine() == 'arm64' and 
            os.environ.get('GGML_METAL') == '1'):
            return 'native_metal'
        
        return 'docker'  # Default fallback

    def _get_performance_preset(self) -> Dict[str, Any]:
        """Get optimized settings based on detected environment."""
        presets = {
            'native_metal': {
                'n_threads': 12,
                'n_ctx': 4096,
                'n_gpu_layers': -1,  # All layers to GPU
                'n_batch': 128,
                'use_metal': True
            },
            'docker': {
                'n_threads': 8,
                'n_ctx': 2048,
                'n_gpu_layers': 0,   # CPU only
                'n_batch': 64,
                'use_metal': False
            }
        }
        
        return presets.get(self.performance_mode, presets['docker'])

Key Innovations:

  • Automatic Environment Detection: Distinguishes between Docker and native execution
  • Performance Presets: Optimized configurations for each environment
  • Resource Management: Intelligent GPU/CPU allocation based on available hardware
  • Metal GPU Support: Full utilization of Apple Silicon capabilities
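
The detection and preset selection above can be exercised on their own; a simplified, self-contained sketch (function and constant names are illustrative):

```python
import os
import platform

def detect_runtime_environment() -> str:
    """Standalone version of the detection logic described above."""
    if os.path.exists('/.dockerenv'):
        return 'docker'
    if (platform.system() == 'Darwin'
            and platform.machine() == 'arm64'
            and os.environ.get('GGML_METAL') == '1'):
        return 'native_metal'
    return 'docker'  # default fallback

PRESETS = {
    'native_metal': {'n_threads': 12, 'n_ctx': 4096, 'n_gpu_layers': -1, 'n_batch': 128},
    'docker':       {'n_threads': 8,  'n_ctx': 2048, 'n_gpu_layers': 0,  'n_batch': 64},
}

mode = detect_runtime_environment()
settings = PRESETS.get(mode, PRESETS['docker'])
print(f"Detected '{mode}' -> {settings['n_threads']} threads, ctx {settings['n_ctx']}")
```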

Complete model loader implementation: src/model_loader.py

Comprehensive Testing Framework

The testing framework provides automated performance benchmarking across deployment modes:

# Automated test execution
./test.sh performance  # Performance benchmarking
./test.sh full         # Complete functional testing
./test.sh quick        # Essential functionality tests

The test suite automatically detects running services and performs comprehensive evaluation with detailed metrics collection for tokens per second, response times, and system resource usage.
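
The core metric, tokens per second, can be measured client-side; a simplified sketch of the measurement loop (the helper name is illustrative — the real suite uses the token counts reported by llama.cpp rather than whitespace splitting):

```python
import time

def tokens_per_second(generate, prompt, runs=3):
    """Average generation throughput over several runs (simplified client-side metric)."""
    rates = []
    for _ in range(runs):
        start = time.time()
        text = generate(prompt)  # any callable returning the generated text
        elapsed = time.time() - start
        n_tokens = len(text.split())  # rough proxy for the model's token count
        rates.append(n_tokens / elapsed if elapsed > 0 else 0.0)
    return sum(rates) / len(rates)
```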

Complete test suite: test.sh

Performance Test Results and Analysis

Comprehensive benchmarking was conducted comparing Docker containerization versus native Metal GPU acceleration:

Test Environment

  • Hardware: Apple M4 Max
  • Model: JAIS 30B (Q4_K_M quantized, 25.97 GiB)
  • Tests: 5 different scenarios across languages and complexity levels

Performance Comparison Results

Test Scenario            Docker (tok/s)   Native Metal (tok/s)   Speedup   Performance Gain
Arabic Greeting          3.53             12.58                  3.56x     +256%
Creative Writing         3.93             13.06                  3.32x     +232%
Technical Explanation    4.08             12.98                  3.18x     +218%
Simple Greeting          2.54             10.24                  4.03x     +303%
Arabic Question          4.44             13.24                  2.98x     +198%

Average Performance Summary:

  • Docker CPU-only: 3.70 tokens/second
  • Native Metal GPU: 12.42 tokens/second
  • Overall Improvement: +235% performance gain
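
These summary figures follow directly from the per-test numbers; as a quick sanity check:

```python
# Per-test throughput from the table above (tokens/second)
docker = [3.53, 3.93, 4.08, 2.54, 4.44]
native = [12.58, 13.06, 12.98, 10.24, 13.24]

avg_docker = sum(docker) / len(docker)   # -> 3.70 tok/s (rounded)
avg_native = sum(native) / len(native)   # -> 12.42 tok/s
speedup = avg_native / avg_docker        # ~3.35x (3.36x when computed from the rounded averages)
gain_pct = (speedup - 1) * 100           # ~235%
print(f"{avg_docker:.2f} -> {avg_native:.2f} tok/s, {speedup:.2f}x, +{gain_pct:.0f}%")
```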

Configuration Analysis

Aspect             Docker Container   Native Metal
GPU Acceleration   CPU-only           Metal GPU (all 49 layers)
Threads            8                  12
Context Window     2,048 tokens       4,096 tokens
Batch Size         64                 128
Memory Usage       26.6 GB CPU        26.6 GB GPU + 0.3 GB CPU
Load Time          ~5.2 seconds       ~7.7 seconds

Testing Methodology

The testing approach followed controlled environment principles:

# Build and deploy Docker version
./build.sh docker --clean
./run.sh docker

# Run performance benchmarks
./test.sh performance

# Switch to native and repeat
docker stop jais-ai
./run.sh native
./test.sh performance

Test Design Principles:

  • Controlled Environment: Same hardware, same model, same prompts
  • Multiple Iterations: Each test repeated for consistency
  • Comprehensive Metrics: Token generation speed, total response time, memory usage
  • Language Diversity: Tests in both Arabic and English
  • Complexity Variation: From simple greetings to complex explanations

Key Findings and Recommendations

Performance Findings

  1. Native Metal provides 3.36x average speedup over Docker CPU-only
  2. Consistent performance gains across all test scenarios (2.98x – 4.03x)
  3. Metal GPU acceleration utilizes Apple Silicon effectively
  4. Docker offers portability with acceptable performance trade-offs

Deployment Recommendations

Use Native Metal When:

  • Maximum performance is critical
  • Interactive applications requiring low latency
  • Development and testing environments
  • Apple Silicon hardware available

Use Docker When:

  • Deploying to production servers
  • Cross-platform consistency required
  • Container orchestration needed
  • GPU resources unavailable

Technical Insights

  • Model Quantization: Q4_K_M provides optimal balance of speed/quality/size
  • Environment Detection: Automatic configuration prevents manual tuning
  • Resource Utilization: Full GPU offloading maximizes Apple Silicon capabilities
  • Production Readiness: Both deployments pass comprehensive functional tests

Repository and Resources

Complete Source Code: GitHub Repository

The repository includes full Python implementation with detailed comments, comprehensive test suite and benchmarking tools, Docker configuration and build scripts, performance analysis reports and metrics, deployment documentation and setup guides, and configuration presets for different environments.

Quick Start

git clone https://github.com/sarmadjari/jais-ai-docker
cd jais-ai-docker
./scripts/model_download.sh  # Download the model
./run.sh                     # Interactive mode selection

Conclusion

This implementation demonstrates effective deployment of large language models with optimal performance characteristics. The combination of intelligent environment detection, automated performance optimization, and comprehensive testing provides a robust foundation for production AI deployments.

The 3.36x performance improvement achieved through Metal GPU acceleration showcases the importance of hardware-optimized deployments, while Docker containerization ensures portability and scalability for diverse production environments.

The complete solution serves as a practical reference for deploying bilingual AI models with production-grade performance monitoring and testing capabilities.

This is just a start; I will keep tuning and hopefully update the documentation as I find time in the future.

Preparing for Azure’s Deprecation of TLS 1.0 and 1.1: What You Need to Know

Microsoft Azure is set to deprecate support for TLS (Transport Layer Security) versions 1.0 and 1.1 on 31 October 2024. This move is part of Microsoft’s ongoing commitment to enhance security and ensure that only the most secure protocols are used across its services. As these older versions become obsolete, it’s crucial for businesses and developers to understand the impact of this change and prepare accordingly.


In this blog post, we’ll delve into:

  • Why Microsoft is deprecating TLS 1.0 and 1.1
  • What the deprecation means for your applications and services
  • Whether you need to update your Azure services or if the change is automatic
  • Potential impacts on your business and solutions
  • How to prepare for the transition with a comprehensive checklist

Why Is Microsoft Deprecating TLS 1.0 and 1.1?

Microsoft is deprecating TLS 1.0 and 1.1 to strengthen security and comply with industry standards. These older versions have known vulnerabilities and are less secure by today’s standards. By moving exclusively to TLS 1.2 and higher, Microsoft aims to:

  • Enhance Security Posture: TLS 1.2 and 1.3 offer stronger encryption algorithms, reducing the risk of data breaches and unauthorized access.
  • Meet Compliance Standards: Many regulations now mandate the use of secure protocols like TLS 1.2 or higher.
  • Promote Best Practices: Encouraging the adoption of modern security protocols ensures a safer ecosystem for all Azure users.

What Does This Mean for Your Applications and Services?

Azure’s Automatic Enforcement

Azure will automatically enforce the deprecation of TLS 1.0 and 1.1 on its services. While Azure handles the enforcement on its end, it’s essential to ensure that your applications and services interacting with Azure are compatible with TLS 1.2 or higher.

Customer Action Required

  • Updating Applications and Configurations: If your applications or services currently use TLS 1.0 or 1.1, you must update both your application code and SSL/TLS configurations to support TLS 1.2 or 1.3.
  • Certificates and Cipher Suites: Review and update your SSL/TLS certificates and cipher suites to ensure compatibility with TLS 1.2 or higher.

Do You Need to Update Your Azure Services, or Will It Happen Automatically?

While Azure services will be updated automatically to enforce TLS 1.2 and higher, customer applications and services will not be updated by Azure. You are responsible for:

  • Ensuring Compatibility: Update your applications, services, and any client-side components to support TLS 1.2 or higher.
  • Testing and Validation: Proactively test your systems to identify any issues arising from the deprecation of TLS 1.0 and 1.1.

Potential Impacts on Your Business and Solutions

Connectivity Issues

  • Service Disruptions: Applications or services not updated to support TLS 1.2 or higher may fail to connect to Azure services, leading to downtime.
  • Third-Party Dependencies: Integrations with third-party services or clients that still use older TLS versions may break.

Business Disruption

  • Operational Interruptions: Downtime can affect productivity, revenue, and customer satisfaction.
  • Compliance Risks: Non-compliance with security standards may result in penalties or legal issues.

Security Enhancements

  • Improved Data Protection: Stronger encryption methods protect data integrity and privacy.
  • Reduced Vulnerabilities: Eliminating outdated protocols minimizes the risk of security breaches.

How to Prepare: A Comprehensive Checklist

To ensure a smooth transition, follow this detailed checklist:

1. Inventory Your Systems

  • Identify Applications and Services: List all applications, services, and devices that connect to Azure.
  • Determine TLS Usage: Check which TLS versions are currently in use.
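
As part of this inventory, a quick probe can show which TLS version a client on a given machine negotiates with a server; a minimal sketch using Python’s standard library (requires network access to the target host):

```python
import socket
import ssl

def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Connect to a server and report the TLS version actually negotiated."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. 'TLSv1.2' or 'TLSv1.3'
```

Run it against your own services and Azure endpoints; anything reporting 'TLSv1' or 'TLSv1.1' needs attention before the deprecation date.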

2. Update Applications and Services

Application Code and Configurations

  • Modify Application Code:
    • Update Libraries and Frameworks: Ensure you’re using updated versions that support TLS 1.2 or 1.3.
      • .NET Applications: Use .NET Framework 4.6 or higher.
      • Java Applications: Update to a JDK version that supports TLS 1.2 or 1.3.
      • Python Applications: Use Python 2.7.9+ or 3.4+.
    • Specify TLS Version: Explicitly set TLS 1.2 or higher in your application’s code or configuration files.
  • Configuration Settings:
    • Update Configuration Files: Modify files like web.config or appsettings.json to enforce TLS 1.2 or higher.
    • Enable Strong Cryptography: Adjust registry settings on Windows systems to enable strong cryptography.

Certificates and SSL/TLS Configurations

  • Review SSL/TLS Certificates:
    • Check Compatibility: Ensure certificates use strong encryption algorithms (e.g., SHA-256).
    • Renew if Necessary: Obtain new certificates if current ones are outdated.
  • Update Server SSL/TLS Settings:
    • Enable TLS 1.2/1.3 Protocols: Configure servers to support only TLS 1.2 and 1.3.
    • Configure Cipher Suites: Use strong cipher suites compatible with TLS 1.2 or higher.
    • Disable Deprecated Protocols: Explicitly disable TLS 1.0 and 1.1 in server settings.

3. Assess Third-Party Dependencies

  • Contact Vendors: Confirm that third-party services support TLS 1.2 or higher.
  • Update Integrations: Modify integrations using older TLS versions.
  • Replace Outdated Components: Find alternatives for components that don’t support newer TLS versions.

4. Review Certificates and Configurations

  • Check Certificate Chain: Ensure the entire chain is valid and uses strong encryption.
  • Test SSL/TLS Configurations: Use tools like SSL Labs’ SSL Server Test to analyze your server.

5. Test in a Staging Environment

  • Simulate the Environment: Disable TLS 1.0 and 1.1 in a test setting.
  • Comprehensive Testing: Test all functionalities and monitor for issues.
  • Monitor Logs and Errors: Identify any TLS-related errors.

6. Update Client Software

  • Ensure Client Compatibility: Verify that client software supports TLS 1.2 or higher.
  • Distribute Updates: Release updates for client applications as needed.
  • User Communication: Inform users about necessary updates.

7. Prepare Your Infrastructure

  • Update Server Software:
    • Operating Systems: Use OS versions that support TLS 1.2 or higher (e.g., Windows Server 2012 R2+).
    • Web Servers: Update IIS, Apache, Nginx, etc., to the latest versions.
  • Configure Network Devices:
    • Firewalls and Load Balancers: Ensure they support and are configured for TLS 1.2 or higher.
    • VPN Gateways: Update configurations to use secure protocols.

8. Plan the Transition

  • Set a Timeline: Schedule updates before Azure’s deprecation date.
  • Communicate Internally: Inform stakeholders about upcoming changes.
  • Risk Mitigation: Develop contingency and rollback plans.

9. Update Development and Deployment Tools

  • CI/CD Pipelines: Ensure tools are compatible with TLS 1.2 or higher.
  • SDKs and APIs: Update to the latest versions.
  • Automation Scripts: Review and update scripts interacting with Azure.

10. Monitor and Support

  • Implement Monitoring:
    • Set Up Alerts: Configure for TLS-related errors.
    • Continuous Monitoring: Use tools to track performance post-migration.
  • Provide Support Channels:
    • Support Teams: Train staff for TLS-related issues.
    • Documentation: Update to reflect changes.

Specific Steps to Update Applications and Certificates

Updating Applications

  • Audit Your Codebase: Look for instances where TLS versions are hard-coded.
  • Update Security Protocols:
    • .NET Example: Set ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
    • Java Example: Configure the JVM with -Djdk.tls.client.protocols="TLSv1.2"
  • Test Third-Party Libraries: Ensure they support TLS 1.2 or higher.
  • Recompile Applications: Ensure changes take effect.
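
For Python applications, the same policy can be enforced with the standard ssl module (Python 3.7+); a short sketch:

```python
import ssl

# Build a client context that refuses TLS 1.0 and 1.1
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Any socket wrapped with this context will negotiate TLS 1.2 or 1.3 only;
# pass it to urllib- or requests-style clients that accept a custom SSL context.
```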

Updating Certificates

  • Verify Certificate Details: Check signature algorithms and key lengths.
  • Obtain New Certificates: If necessary, get new ones with stronger encryption.
  • Update Certificate Stores: Install new certificates on all relevant servers.

Conclusion

Azure’s deprecation of TLS 1.0 and 1.1 is a significant move towards enhancing security and ensuring that only the most secure protocols are used. While Azure will handle updates on its end, it’s crucial for you to:

  • Proactively Update: Ensure your applications, services, and certificates are compatible with TLS 1.2 or higher.
  • Thoroughly Test: Identify and resolve issues before they impact production.
  • Stay Informed: Keep abreast of Azure’s timelines and updates.

By taking these steps, you can mitigate risks associated with the deprecation, ensuring a smooth transition and maintaining uninterrupted access to Azure services.

Rethinking Microsoft’s Ecosystem: The Missing Piece

Microsoft has made significant strides in AI, cloud computing, and PC technologies, establishing itself as a leader in these domains. The introduction of PC+ Copilot is a testament to their innovative approach, leveraging AI to enhance user experience. However, there remains a crucial element that could elevate Microsoft’s ecosystem to new heights: mobile phones.

The Current Landscape

Microsoft’s ecosystem is robust, with cloud-ready applications like Microsoft 365 and Office 365 seamlessly integrating with AI-enabled PCs. This creates a powerful synergy between cloud services and desktop applications. However, the mobile segment is conspicuously absent from this ecosystem. While Microsoft has ventured into the mobile space before, the timing and strategy were perhaps misaligned with market demands. Today, with an open-minded and adaptive approach, Microsoft has the opportunity to rethink and reintegrate mobile phones into their ecosystem.

A New Vision: Microsoft-Integrated Android

Imagine a mobile operating system based on Android, but with deep integration of Microsoft products and services. This approach could offer several benefits:

  1. Familiarity and App Compatibility: By using Android as the base, Microsoft can ensure compatibility with the vast array of existing Android apps. This addresses the initial challenge of app availability that plagued their previous mobile efforts.
  2. Seamless Integration: Similar to how Microsoft revamped the Edge browser by adopting Chromium, they can create a mobile OS that integrates seamlessly with their cloud and PC ecosystem. Features like cross-device file sharing, universal clipboard, and cloud synchronization can provide a user experience on par with, or even surpassing, Apple’s ecosystem.
  3. Enhanced Productivity: With Office 365, OneDrive, and other Microsoft tools natively integrated, users can transition effortlessly between their desktop and mobile devices. This continuity boosts productivity and simplifies workflows for both consumers and enterprise users.

Building on the Success of Microsoft Edge

The success of Microsoft Edge is a prime example of how adopting a robust foundation and layering it with Microsoft’s unique value proposition can lead to a superior product. By transitioning Edge to the Chromium engine, Microsoft not only improved performance and compatibility but also added unique features that distinguished Edge from other browsers. Similarly, using Android as the foundation for a new mobile OS allows Microsoft to leverage the strengths of a well-established platform while infusing it with their own innovative features.

Marketing and Technological Benefits

Marketing

  1. Brand Loyalty: Offering a mobile solution that integrates perfectly with existing Microsoft products can strengthen brand loyalty. Users who rely on Microsoft for their PC and cloud needs will find it appealing to extend this trust to their mobile devices.
  2. Targeted Campaigns: Highlighting the benefits of a unified ecosystem in marketing campaigns can attract both individual consumers and businesses looking for a cohesive IT environment.
  3. Strategic Partnerships: Licensing this new mobile OS to various manufacturers can increase market penetration and provide diverse device options for consumers.

Technological

  1. Innovation Leadership: By combining the power of AI, cloud services, and mobile technology, Microsoft can position itself as a leader in technological innovation.
  2. Security Enhancements: Building a mobile OS with security at its core can offer robust protection against modern threats. Integration with Microsoft Defender and other security tools can provide a secure environment for both personal and enterprise use.
  3. Unified Management: Enterprises can benefit from a unified management system for all devices, simplifying IT administration and enhancing security policies across platforms.

Security Benefits

  1. Enhanced Security: By controlling the mobile OS environment, Microsoft can ensure higher security standards. Features like integrated Microsoft Defender, secure boot processes, and regular security updates can provide a secure platform for users.
  2. Enterprise Control: For enterprise users, a Microsoft-integrated mobile OS can offer advanced security features and management tools, allowing IT departments to enforce security policies uniformly across all devices.
  3. Data Protection: Seamless integration with Microsoft’s cloud services ensures that data is protected through encryption and secure access controls, whether it is stored locally on the device or in the cloud.

Conclusion

Rethinking and reintegrating mobile phones into Microsoft’s ecosystem is not just a strategic move, but a necessary one to provide a comprehensive, seamless user experience. By leveraging Android as a base and building upon it with Microsoft’s products and services, the potential for a cohesive and secure ecosystem is immense. Building on the success seen with Microsoft Edge, this approach could redefine mobile productivity and set new standards in the tech industry, making Microsoft an even more integral part of our digital lives.

Creating a Clean Python Development Environment using Docker and Visual Studio Code

Python

Python is a high-level, dynamically-typed programming language that has taken the software development industry by storm. It’s known for its simplicity, readability, and vast library ecosystem. Python has become the language of choice for many in web development, data science, artificial intelligence, scientific computing, and more. Its versatile nature makes it ideal for both beginners and experienced developers.

Docker

Docker is a revolutionary tool that allows developers to create, deploy, and run applications in containers. Containers can be thought of as lightweight, stand-alone packages that contain everything needed to run an application, including the code, runtime, libraries, and system tools. Docker ensures that an application runs consistently across different environments, eliminating the infamous “it works on my machine” problem. It simplifies the process of setting up, distributing, and scaling applications, making it an invaluable tool for modern development.

Visual Studio Code

Visual Studio Code (VS Code) is a powerful, open-source code editor developed by Microsoft. It provides a lightweight yet feature-rich environment that supports a multitude of programming languages, including Python. With a vast ecosystem of extensions, integrated Git support, debugging capabilities, and an intuitive interface, VS Code has quickly become the editor of choice for many developers around the world.

Why Combine Python, Docker, and Visual Studio Code?

You might be wondering why one would want to combine Python, Docker, and Visual Studio Code. The answer lies in the fusion of simplicity, consistency, and efficiency. By using Docker, you can ensure that your Python application runs the same way, irrespective of where it’s deployed. This means no more headaches about dependency issues or system incompatibilities. On the other hand, VS Code provides a seamless development experience, with features that play nicely with both Python and Docker. Combining these three tools gives you a streamlined, consistent, and efficient development workflow.

Steps to Set Up Your Dev Environment:

  1. Install Prerequisites:
    • Install Docker and ensure it’s running.
    • Download and install Visual Studio Code.
    • Install the ‘Python’ and ‘Docker’ extensions from the Visual Studio Code marketplace.
  2. Setup Docker:
    • Create a new directory for your project.
    • Inside this directory, create a file named Dockerfile.
    • In the Dockerfile, start with the following content (a minimal example; adjust the Python version and base image to your project’s needs):

      FROM python:3.11-slim
      WORKDIR /usr/src/app
      COPY requirements.txt .
      RUN pip install --no-cache-dir -r requirements.txt

    • Create a requirements.txt file in the same directory, listing any Python libraries your project depends on, for example:

      numpy
      pandas

      or you can specify the library version:

      tensorflow==2.3.1
      uvicorn==0.12.2
      fastapi==0.63.0

  3. Build the Docker Container Image:
    • In VS Code, open the folder containing your Dockerfile and other project files.
    • Use the Docker extension to build your Docker image by right-clicking the Dockerfile and selecting ‘Build Image…’, or run the command
      docker build -t mypythonenv .



    • Run the container, mounting the working directory that contains your Python code into the container:
      docker run -it --rm -v C:\Users\Sarmad\Projects\MyPythonProject:/usr/src/app mypythonenv



  4. Attach to the Running Docker Container
    • Attach the running Python container to Visual Studio Code so you can run and debug your Python code: click the Docker icon, then right-click the running container (in our example, “mypythonenv”) and attach it to Visual Studio Code.


    • Visual Studio Code now has access to the Python environment running inside the Docker container, and the container has access to the Python code files that were mounted via the docker run command.


  5. Run the Python code
    • To run our “hello-world.py” code, click the Run and Debug icon, then the blue “Run and Debug” button, and select ‘Python File’.


    • The Python Code will be running inside your container


  6. Clean Up & Share:
    • Once done with development, you can push your Docker image to a registry (like Docker Hub) or your own private registry for sharing or deployment.

By following these steps, you’ll have a Python development environment that’s clean, consistent, and easy to use.

Happy coding!

Get Database and System Information from SQL

Before we migrate or upgrade, we need some critical information: how SQL Server and the system will be licensed based on CPUs, sockets, and cores, as well as collation details that help determine the best way to consolidate databases.

So I made this script to gather the basic information I needed. I decided to share it with the community, and hopefully it will be helpful for you.

Read More »