Technologies

Other Technologies

The Aristotle project team and science use case researchers developed the following technologies and techniques:

  • Aristotle AWS Pricing Tool – developed a price-performance analysis tool, based on the original DrAFTS market prediction technology, that helps users compare Aristotle resources to various AWS alternatives on performance, cost, and price-performance. To do so, the tool runs the TOP500 LINPACK benchmark on all Aristotle and AWS instance types and generates a report ranking them. Users can ask questions such as: Which AWS instance type is most equivalent to an instance type in Aristotle? Which is less costly? Or, if I am willing to spend 20% more to go 30% faster, which instance type should I use? (A sketch of the ranking idea appears after this list.)
  • Automated Deployment Methods – implemented a Slurm HPC cluster in a cloud with OpenHPC 2 series based on CentOS/Rocky Linux 8.
  • Centaurus – created a cloud service for K-means clustering. Centaurus is currently being used by former REU student Nevena Golubovic, who went on to complete her PhD, to correlate California power grid usage data with water usage data; Golubovic is now with a start-up that received a grant from the California Energy Commission. To date, two Aristotle REU students have gone on to complete their PhDs, and a third has been accepted into a PhD program.
  • Containerized Application Kernels – developed containerized Application Kernels (AK) and then compared AK performance on Google Cloud to performance on Aristotle Clouds, Comet, Bridges, Stampede2, Bridges-2, and Expanse.
  • Convertible Application Containers – developed application containers that are convertible from Docker to Singularity and verified their functionality on a variety of cloud platforms.
  • CSPOT - portable, multi-scale Functions-as-a-Service (FaaS) system for implementing IoT applications
  • Devices-as-Services - new "flipped" client-server model for IoT in which devices at the edge are the servers, providing nanoservices that applications in the clouds (the clients) compose to build their implementations (see the device-side sketch after this list)
  • Federated Open XDMoD with Cloud Metrics – added cloud metrics to Open XDMoD 9.0, including average cores reserved, average memory reserved, average root volume storage reserved, average wall hours per session, total core hours, number of active sessions, number of sessions ended, and number of sessions started. These metrics can be grouped or filtered by instance type, project resource, and VM size (core/memory). (A toy aggregation appears after this list.)
  • hCOBRA – developed a modeling framework that extends the COBRA Toolbox for MATLAB to make large-scale simulations more scalable and reliable, so that scientists can better navigate the complexity of metabolic modeling and the recursive structure of simulations with priority effects.
  • K-means clustering analysis system - scalable system for executing and scoring K-means clustering techniques; runs as a cloud service on Aristotle and Jetstream (a scoring sketch appears after this list)
  • Kubernetes Implementation Code for MPI Clusters – this code reflects the current state of experimental support for MPI applications managed by Terraform Kubernetes constructs, which allow for automatic node-count scaling and cloud portability. The software-based resource provisioning is currently best attempted on AWS; it also works on Google Cloud Platform with MPI applications. Conversion to other cloud platforms should be possible with extensive changes to a platform-specific Terraform Kubernetes provider or other resource configurations. Familiarity with Kubernetes concepts, resource provisioning on the desired cloud, and debugging of parallel computing applications is recommended. The repository also includes a “Getting Started with Kubernetes” tutorial. (A minimal scaling sketch appears after this list.)
  • Mandrake - software infrastructure for edge clouds (private clouds located at the network edge), designed to provide reliable, "lights out" unattended operation and application hosting in IoT deployments
  • Metabolic Model and Container – developed code to predict metabolic function of the gut microbiota of Drosophila melanogaster using v.3.0.4 of the OpenCOBRA Toolbox and v7.51.1 of the Gurobi Optimizer. An optional, containerized environment for running the code is available as well as a tutorial for performing the simulations.
  • Multicloud run method - uses Python, the Celery distributed task queue, and other tools to run applications across multiple cloud sites (see the sketch after this list)
  • NanoLambda - developed a portable platform that brings Functions-as-a-Service (FaaS), high-level language programming, and familiar cloud service APIs to non-Linux and microcontroller-based IoT devices. NanoLambda couples a new, minimal Python runtime system designed for the least capable end of the IoT device spectrum with API compatibility for AWS Lambda and S3. It transfers functions between IoT devices (sensors, edge, cloud), providing power and latency savings while retaining the programmer productivity benefits of a high-level language and FaaS. A key feature of NanoLambda is a scheduler that intelligently places function executions across multi-scale IoT deployments according to resource availability and power constraints (a toy placement sketch appears after this list).
  • Radio Astronomy Container – developed a single container of radio astronomy software that combines the pipeline components developed for pulsar and other transient detections that can be deployed either on the cloud with Docker or on an XSEDE HPC resource with Singularity.
  • Semi Dynamic SteadyCom – developed a method that employs SteadyCom with discrete time steps.
  • Seneca - fast and low cost hyperparameter search method for machine learning models
  • Singularity and container images - new images that create reproducible research workflows which, due to their portability, can be shared broadly across institutions and disciplines and run on any cloud (Aristotle, NSF cloud, or public clouds)
  • Sparta - to protect edge clouds from overheating, we developed a heat-budget-based scheduling system called Sparta, which leverages dynamic voltage and frequency scaling (DVFS) to adaptively control CPU temperature. Sparta takes machine learning applications, datasets, and a temperature threshold as input. It sets the initial frequency of the CPU based on historical data and then dynamically updates it, according to the applications' execution profiles and the ambient temperature, to safeguard edge devices. (A simplified control loop appears after this list.)
  • STOIC (Serverless Teleoperable Hybrid Cloud) - developed an IoT application and offloading system that extends the serverless model in three ways: (1) STOIC adopts a dynamic feedback control mechanism to precisely predict latency and dispatch workloads uniformly across edge and cloud systems using a distributed serverless framework, (2) STOIC leverages hardware acceleration (e.g., GPU resources) for serverless function execution when available from the underlying cloud system, and (3) STOIC can be configured in multiple ways to overcome the deployment variability associated with public cloud use. (A toy dispatch sketch appears after this list.)
  • Telemetry Data Visualizer – REU student Kerem Celik created a tool to visualize telemetry data from the Citrus Under Protective Screening project and the Edible Campus farm at UCSB. The visualizer can be downloaded and run on a Mac, or it can be run as a Docker-based software service. A paper may be forthcoming. UCSB is also considering using this tool for a big history study on climate change.
  • Temperature prediction methods - new methods for improving the accuracy of outdoor temperature prediction by IoT devices
  • WaterPaths Container – developed an on-demand MPI cluster with Docker container-based software deployment.
  • webGlobe - a cloud-based geospatial analysis framework for interacting with climate data
  • Wind turbine blade analysis framework - robust, flexible framework for generating an observationally constrained georeferenced assessment of precipitation-induced wind turbine blade erosion
  • WRF CONUS Benchmark Containers – implemented WRF 4.2.2 to run CONUS benchmarks on bare metal HPC in a Docker and a Singularity container.
  • WRF Docker Container – implemented a Docker container for WRF 3.8.1 with a Fitch patch.
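
The sketches below, referenced in the list above, are minimal illustrations rather than the projects' actual code. First, the ranking idea behind the Aristotle AWS Pricing Tool: order instance types by LINPACK throughput per hourly dollar. The instance names, GFLOPS figures, and prices are placeholders, not tool output.

```python
# Illustrative price-performance ranking (GFLOPS per hourly dollar).
# All instance names, benchmark numbers, and prices are made up.
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    gflops: float        # LINPACK benchmark result
    usd_per_hour: float  # hourly price (or cost-recovery rate)

    @property
    def gflops_per_dollar(self) -> float:
        return self.gflops / self.usd_per_hour

instances = [
    Instance("aristotle.c4.m16", 120.0, 0.150),
    Instance("aws.m5.xlarge", 110.0, 0.192),
    Instance("aws.c5.xlarge", 140.0, 0.170),
]

# The report answers questions such as "which AWS type is the best
# price-performance match for an Aristotle type?"
for inst in sorted(instances, key=lambda i: i.gflops_per_dollar, reverse=True):
    print(f"{inst.name}: {inst.gflops:.0f} GFLOPS at "
          f"${inst.usd_per_hour:.3f}/h -> {inst.gflops_per_dollar:.0f} GFLOPS/$")
```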
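
For Devices-as-Services, the defining inversion is that the edge device runs the server. In this toy sketch the device exposes a single nanoservice over HTTP, which a cloud application (the client) would compose with others; the /temperature endpoint and port are hypothetical.

```python
# Toy "flipped" client-server model: the device serves a nanoservice;
# applications in the cloud are the clients that compose it.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json, random

class NanoService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/temperature":  # one hypothetical nanoservice
            body = json.dumps({"celsius": 20 + random.random()}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # The device (not the cloud) listens for requests.
    HTTPServer(("0.0.0.0", 8000), NanoService).serve_forever()
```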
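
The cloud metrics added to Open XDMoD can be pictured as simple aggregations over VM session records. This toy version computes average cores reserved and total core hours grouped by instance type; the session records are fabricated stand-ins for real cloud event logs.

```python
# Toy aggregation of the kind the Open XDMoD cloud realm reports.
from collections import defaultdict

sessions = [
    {"instance_type": "m1.medium", "cores": 2, "wall_hours": 5.0},
    {"instance_type": "m1.medium", "cores": 4, "wall_hours": 1.0},
    {"instance_type": "m1.large",  "cores": 8, "wall_hours": 3.5},
]

by_type = defaultdict(list)
for s in sessions:
    by_type[s["instance_type"]].append(s)

for itype, group in sorted(by_type.items()):
    avg_cores = sum(s["cores"] for s in group) / len(group)
    core_hours = sum(s["cores"] * s["wall_hours"] for s in group)
    print(f"{itype}: avg cores {avg_cores:.1f}, core hours {core_hours:.1f}")
```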
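
The execute-and-score pattern behind Centaurus and the K-means analysis system can be approximated as follows; the silhouette score stands in for whichever scoring criteria the service actually applies, and the synthetic data is for illustration only.

```python
# Run K-means for several candidate cluster counts, score each
# result, and report the best-scoring k (silhouette, as one example).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Three well-separated synthetic 2-D clusters.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2)) for c in (0, 5, 10)])

results = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    results[k] = silhouette_score(X, labels)

best_k = max(results, key=results.get)
print(f"best k = {best_k} (score {results[best_k]:.3f})")
```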
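
In the Kubernetes MPI repository, node-count scaling is driven by Terraform Kubernetes constructs; purely to illustrate the underlying operation, the equivalent rescaling of a worker StatefulSet with the official Kubernetes Python client might look like this. The "mpi-worker" and "mpi" names are hypothetical.

```python
# Rescale a hypothetical MPI worker StatefulSet. Requires a reachable
# cluster and a local kubeconfig; the actual repository performs this
# kind of scaling through Terraform rather than a Python script.
from kubernetes import client, config

def scale_mpi_workers(replicas: int) -> None:
    config.load_kube_config()  # read credentials from the local kubeconfig
    apps = client.AppsV1Api()
    apps.patch_namespaced_stateful_set_scale(
        name="mpi-worker",    # hypothetical StatefulSet name
        namespace="mpi",      # hypothetical namespace
        body={"spec": {"replicas": replicas}},
    )

if __name__ == "__main__":
    scale_mpi_workers(4)
```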
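
The core mechanism of the multicloud run method, routing tasks through a shared broker to Celery workers at different cloud sites, looks roughly like this; the broker URL, task body, and queue names are placeholders.

```python
# One Celery app; workers at each cloud site consume their own queue.
from celery import Celery

app = Celery("multicloud", broker="amqp://guest@broker.example.org//")

@app.task
def run_simulation(params):
    ...  # application code runs wherever a worker picks up the task
    return {"site_result": params}

if __name__ == "__main__":
    # Route the same task to workers at two different cloud sites.
    for queue in ("aristotle_cornell", "aws_us_east_1"):
        run_simulation.apply_async(args=[{"case": queue}], queue=queue)
```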
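
NanoLambda's scheduler places function executions according to resource availability and power constraints; a greedy version of that decision might look like the following, where the field names and numbers are illustrative assumptions rather than the NanoLambda API.

```python
# Greedy placement: keep only targets with enough memory and within
# the power budget, then prefer the lowest-latency one.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    free_kb: int        # memory available for the function
    mw_per_call: float  # estimated power draw per invocation
    latency_ms: float   # expected end-to-end latency

def place(targets, needed_kb, power_budget_mw):
    feasible = [t for t in targets
                if t.free_kb >= needed_kb and t.mw_per_call <= power_budget_mw]
    return min(feasible, key=lambda t: t.latency_ms, default=None)

tiers = [
    Target("sensor", free_kb=32, mw_per_call=5, latency_ms=1),
    Target("edge", free_kb=4096, mw_per_call=50, latency_ms=10),
    Target("cloud", free_kb=1 << 20, mw_per_call=0, latency_ms=120),
]
print(place(tiers, needed_kb=256, power_budget_mw=60))  # -> the edge tier
```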
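
Sparta's heat-budget control can be approximated by a feedback loop over the standard Linux cpufreq and thermal sysfs interfaces: back the CPU frequency off as the temperature nears the threshold and raise it when there is headroom. The paths, step size, and 5 °C hysteresis band are assumptions; Sparta itself also draws on historical data and application execution profiles.

```python
# Simplified DVFS feedback loop in the spirit of Sparta.
import time

FREQ_FILE = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq"
TEMP_FILE = "/sys/class/thermal/thermal_zone0/temp"
STEP_KHZ = 100_000
MIN_KHZ, MAX_KHZ = 600_000, 1_500_000

def read_temp_c() -> float:
    with open(TEMP_FILE) as f:
        return int(f.read()) / 1000.0  # sysfs reports millidegrees C

def set_max_freq(khz: int) -> None:
    with open(FREQ_FILE, "w") as f:
        f.write(str(khz))

def control_loop(threshold_c: float, start_khz: int = 1_200_000) -> None:
    freq = start_khz  # Sparta derives the starting point from history
    while True:
        temp = read_temp_c()
        if temp >= threshold_c:
            freq = max(MIN_KHZ, freq - STEP_KHZ)  # back off to cool down
        elif temp < threshold_c - 5:
            freq = min(MAX_KHZ, freq + STEP_KHZ)  # reclaim performance
        set_max_freq(freq)
        time.sleep(1)

# control_loop(threshold_c=70.0)  # requires root on a Linux edge device
```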
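
Finally, STOIC's feedback-driven dispatch can be caricatured as keeping a smoothed latency estimate per deployment and sending work wherever the prediction is lowest; the exponentially weighted predictor and deployment names below are illustrative stand-ins for STOIC's actual controller.

```python
# Latency-feedback dispatch: measured latencies update per-deployment
# predictions, and the next batch goes to the lowest prediction.
class Dispatcher:
    def __init__(self, deployments, alpha=0.3):
        self.predicted = {d: 1.0 for d in deployments}  # seconds
        self.alpha = alpha

    def choose(self) -> str:
        return min(self.predicted, key=self.predicted.get)

    def observe(self, deployment: str, measured_s: float) -> None:
        # Exponentially weighted update keeps predictions tracking load.
        p = self.predicted[deployment]
        self.predicted[deployment] = (1 - self.alpha) * p + self.alpha * measured_s

d = Dispatcher(["edge-cpu", "cloud-gpu"])
d.observe("edge-cpu", 2.0)   # measured batch latencies feed back in
d.observe("cloud-gpu", 0.8)
print(d.choose())            # -> cloud-gpu (lowest predicted latency)
```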