
Wednesday, April 28, 2021

Spin up your Online Store in Minutes with Bitnami Prestashop and AWS Lightsail

Regardless of whether you are using a cloud provider for the first time or are an experienced user, Amazon Lightsail is a proven choice for running your websites and applications. 

If you are planning to launch your own e-commerce website, chances are that you will pick Prestashop as the solution for your online store. We are delighted to announce that Amazon Lightsail has just expanded its catalog to include the Prestashop Certified by Bitnami image. 

Follow this guide to get Prestashop set up and running on Amazon Lightsail with just a few clicks.

Now you can get your business running on the cloud within minutes! 

Bitnami Prestashop and Amazon Lightsail: The easiest and most reliable way to build your e-commerce store

Prestashop is unquestionably one of the most popular open-source e-commerce solutions among both developers and merchants. It provides more than 2000 themes and 3500 free and paid modules for you to customize your site. Bitnami has pre-packaged a blueprint image for Amazon Lightsail that bundles the most up-to-date and secure version of Prestashop and its components. 

It takes just a few clicks to have your site go live online. First, you need to log in to your Amazon Lightsail account and click “Create instance”. Then choose Linux/Unix as a platform and pick Prestashop from the list of available blueprints. Name your instance and click “Create Instance”. 



This will spin up a server with the Bitnami Prestashop image running on it. Getting the application password and the IP address of your Prestashop website is super easy thanks to the intuitive user interface, and you’re done!
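If you prefer the command line, the same flow works with the AWS CLI. The sketch below is a rough equivalent under stated assumptions: the CLI is already configured, and the blueprint ID, bundle ID, zone, and instance name shown are placeholders, so list the real identifiers first with get-blueprints and get-bundles.

# List the available blueprints and bundles to confirm the exact IDs in your region
aws lightsail get-blueprints
aws lightsail get-bundles

# Create the instance (the name, zone, and IDs below are illustrative placeholders)
aws lightsail create-instances \
  --instance-names my-prestashop-store \
  --availability-zone us-east-1a \
  --blueprint-id prestashop \
  --bundle-id nano_2_0

# Once the instance is running, the Bitnami application password is stored in the
# instance home directory; connect over SSH and run:
# cat ~/bitnami_application_password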

To get the best out of Prestashop on your Amazon Lightsail server, check out this getting started guide and the documentation for managing your Bitnami Prestashop image on Amazon Lightsail. 


Tuesday, March 30, 2021

How-To: Deploy Bitnami’s LAMP Appliance to VMware Environments Directly via VMware Marketplace!

If you are a developer, there is a high chance that you’ve utilized the LAMP stack in the past 24 hours. The LAMP stack, consisting of Linux, Apache, MySQL, and PHP/Perl/Python, is a web service tool used to quickly build websites and applications through a reliable coding framework. 

For developers who work with VMware environments, the easiest way to deploy the LAMP stack is through the Bitnami-packaged appliance that’s available on the VMware Marketplace – VMware’s one-stop-shop to discover, try and deploy open-source and third-party ecosystem solutions for all VMware products.  


Why use the Bitnami LAMP Appliance? 

LAMP is an extremely popular package of fully open-source services. So - what’s so special about the Bitnami-packaged version? And why use it via the VMware Marketplace, instead of just downloading and configuring it on your own? 

The Bitnami LAMP Appliance provides developers with standalone project directories where they can store their applications. Thus, they can start deploying their PHP applications without the need to set up the entire infrastructure. This image bundles the most up-to-date releases of Apache, MySQL, and PHP on Linux, as well as phpMyAdmin, PHP modules, and Composer. Apache is configured with the 'event' Multi-Processing Module and PHP-FPM, an ideal configuration for high-traffic websites that serve many simultaneous requests.  
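To make the project-directory idea concrete, here is a minimal sketch of dropping a test PHP page onto the appliance and confirming the MPM configuration. The /opt/bitnami paths follow the usual Bitnami layout but are assumptions that may differ between image versions.

# Hypothetical quick test on a Bitnami LAMP instance (paths may vary by image version)
sudo mkdir -p /opt/bitnami/projects/myapp
echo '<?php phpinfo();' | sudo tee /opt/bitnami/projects/myapp/index.php

# Confirm that Apache is running the 'event' MPM described above
/opt/bitnami/apache2/bin/httpd -V | grep -i 'server mpm'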

Most importantly, the Bitnami LAMP solution is ready for production environments. That means that it includes SSL auto-configuration with Let’s Encrypt certificates and HTTPS/HTTP2 support, all of which allow developers to build more secure sites and applications as quickly as possible.  
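As an illustration of the SSL auto-configuration, Bitnami images bundle an interactive HTTPS configuration helper. The path below assumes the standard Bitnami layout; check the appliance documentation if it differs on your image.

# Launch the interactive Let's Encrypt / HTTPS configuration helper (path is an assumption)
sudo /opt/bitnami/bncert-tool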

The complete Bitnami catalog – including the LAMP appliance – is accessible from the VMware Marketplace, allowing users to easily deploy a wide range of highly secure, industry standardized, and open-source applications across all VMware environments. 

By deploying the Bitnami LAMP Appliance from the VMware Marketplace, developers only need to focus on building applications, without having to worry about governance, security, and so on.  It can be deployed to environments such as VMware Cloud on AWS, VMware Cloud Director, and VMware vSphere. Developers can save time, effort, and resources. 


So how does it really work? 

To consume LAMP directly from the VMware Marketplace, you just need to browse the catalog, find the LAMP solution, and hit ‘Subscribe’. From there, just input the deployment configuration parameters. Within seconds, your application will be running in the infrastructure of your choice, whether it is a public, private, or hybrid environment. 

To see the process step-by-step, watch the demo video below, where Bitnami engineer Michiel D’Hont subscribes to the LAMP appliance via the VMware Marketplace! 



Next steps 

Interested in using the VMware Marketplace? Sign up today on our catalog page – it’s free! For more information, refer to our Docs and learn more on our marketing page or blogs page.  

For more information on Bitnami, please refer to the Bitnami and Bitnami documentation websites. 

To get started with subscribing to the Bitnami LAMP, click here!  


 

Tuesday, January 21, 2020

Access and Manage Your Servers Remotely with the Bitnami Stack for Apache Guacamole

Want to access your computers from anywhere using just a Web browser? Look no further than Apache Guacamole, a "clientless remote desktop gateway" that supports standard protocols like VNC, SSH, and RDP and requires no plugins or client software.

Apache Guacamole allows users to access their computers from anywhere while also providing administrators with a way to configure, manage and control access to remote desktop connections. You can also combine it with a cloud-hosted desktop operating system to benefit from the flexibility and resilience of cloud computing.

Bitnami has released an up-to-date and secure image that you can use to launch Apache Guacamole locally or in the cloud. Choose the platform you want to run it on and immediately benefit from having your desktop reachable from any part of the world and from any device.

This blog post shows you how easy it is to deploy the Bitnami Stack for Apache Guacamole on the Microsoft Azure Cloud. It also walks you through the process of creating a remote connection with a Windows machine running on a Microsoft Azure server.
These instructions are for the Microsoft Azure Portal, but you can also run Apache Guacamole on an AWS instance, an Oracle server, and soon on a Google Cloud Platform server. You can also play with it on your local machine by downloading a virtual appliance.

Glyptodon Enterprise also available for Apache Guacamole


For those users and organizations that require enterprise-class scalability and management, Glyptodon Inc. offers a commercial solution powered by Apache Guacamole: Glyptodon Enterprise.

This package includes streamlined installation and maintenance, and timely security updates.
It also offers long-term support for major releases for at least five years and receives regularly scheduled updates. Updates to new releases ensure compatibility, making it easy for administrators to keep their installations up to date.

Glyptodon Enterprise is packaged in RPM repositories and compatible with any Red Hat Enterprise Linux or CentOS release.


Deploy the Bitnami Stack for Apache Guacamole 


Deploying Apache Guacamole from the Bitnami Launchpad for Microsoft Azure is easy; everything is included in the image that Bitnami provides for Apache Guacamole. Thus, the application will run on an Azure server without issues. This image uses the latest version of Apache and it includes SSL auto-configuration with Let's Encrypt certificates.

Let's take a quick look at the Bitnami Stack for Apache Guacamole default configuration.  There are three major components included in the image:

  • Apache Guacamole Server 
  • Apache Guacamole Client 
  • Database

Apache Guacamole Server
It is a daemon server (guacd) that talks to the remote desktops and accepts connections from the users logged in to the Web application.

Apache Guacamole Client
It is the frontend of Guacamole, implemented as a Java application that runs on top of Apache Tomcat.

Database
The user authentication for Apache Guacamole is configured to work with PostgreSQL.
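A quick way to see these three components on a running instance is to query the Bitnami control script. This is a sketch based on the usual Bitnami layout; the script path and the service names it reports are assumptions and may differ between image versions.

# Show the status of the bundled services (guacd, Tomcat, PostgreSQL)
sudo /opt/bitnami/ctlscript.sh status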

Launch the Apache Guacamole image


To launch Apache Guacamole, follow these steps:

1. In the Apache Guacamole deployment offering page, click the “Single-Tier” button to display the deployment options for the cloud.




2. Select the cloud where you want to deploy the application. This post uses Microsoft Azure, but the deployment process is similar in other clouds.

Make sure that your Microsoft Azure and Bitnami accounts are connected. Check the Get Started with the Bitnami Launchpad for Microsoft Azure guide for more information on this.




You will be redirected to the Bitnami launchpad to create a new virtual machine on Azure.

3. Enter a name for your server, select the server size, and the region where you want to deploy the solution. As you can see in the image below, the image type is selected by default:



4. Confirm your selection by hitting the “Create” button at the end of the page. The Bitnami Launchpad will now begin spinning up the new server. The process usually takes a few minutes, and a status indicator on the page provides a progress update.

Access the client


Once the cloud server has been provisioned, the status indicator will show that it is “running”, and the Bitnami Launchpad page will display the server details, application credentials, IP address, and the SSH keys and command for connecting to the server remotely.

You can manage your application from the Bitnami Launchpad user interface or by accessing the Azure Console through the “Manage in the Azure Console” button.

To access the Apache Guacamole Client:

1. Click the “Go to the application” button.



2. Log in to the client by using the credentials provided in the “Application Info” section.

Use Apache Guacamole


To start managing users and connections, navigate to the user profile and select the “Settings” option from the drop-down menu.

Create a new connection


To enable a new remote connection, follow these instructions:

1. Navigate to the “Settings -> Connections” tab.  Click the “New Connection” button.



2. In the resulting form, enter a name to identify the connection, location, and protocol.

3. Select “ROOT” as the location. Then select the protocol you want to use to connect to the machine.

In general, the protocol used for connecting to a Windows machine is RDP. If you want to connect to a Linux server instead, use the VNC protocol.

4. Fill in the rest of the required values, such as the connection limit, load balancing details, or the Guacamole proxy parameters.

5. In the “PARAMETERS -> Network” section, enter the public IP address of your machine in the “Hostname” field and the port it listens on in the “Port” field. In the “Authentication” section, enter the username and password associated with your machine.



NOTE: Make sure that the server where the Windows machine is running is publicly accessible, so that Apache Guacamole can connect to it remotely (see the quick check after step 6).

6. Click “Save” to create this new connection.
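As noted above, the target machine must be reachable from the Guacamole server on the chosen protocol's port for the connection to succeed. A minimal reachability check from the Guacamole server, assuming RDP on its default port 3389 (VNC typically uses 5900) and using a placeholder IP address:

# Quick port check from the Guacamole server; replace the IP with your machine's public address
nc -zv -w 5 203.0.113.10 3389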

Create a user 


Once the connection is created, you need to create a user and associate the connection with it. 
1. Navigate to the “Settings -> Users” tab. You will see the admin user in the list of enabled users. To add a new user, click the “New users” button.

2. In the resulting form, enter the username, password, and personal info. Define the account restrictions and permissions and click “Save” to make the changes take effect.

3. In the “CONNECTIONS” section, you will find the connection you have created. Activate the checkbox to associate the user with that connection. Click “Save” to make the changes take effect.



Connect remotely to your machine


To start using the new connection, go back to the “Home” page and click the RDP connection.
Apache Guacamole will connect you directly to your machine:


Learn more about how to use the Bitnami Stack for Apache Guacamole in the Bitnami documentation page or the Apache Guacamole official manual. Remember that if you need enterprise-class scalability and management, Glyptodon Enterprise is the best choice for you. Start working remotely!

Tuesday, November 27, 2018

Simplify Golden Image Management with Bitnami Stacksmith

If you are a cloud architect tasked with maintaining your organization's golden images - the base templates that are used enterprise-wide to create development environments or package applications - then you already know that this isn't an easy ask. Some of the issues you've probably already encountered are:

  • Pushy developers, who always want the shiniest new tech and are constantly asking for updated images with newer component versions; 
  • Platform compatibility issues, which require an image to work the same way across multiple cloud platforms and clusters; 
  • Ongoing security updates and bug fixes, which need to be immediately incorporated into existing images; 
  • Enterprise audit requirements, which require you to have (and maintain) a detailed list of everything that goes into each golden image, including version numbers, license information and other metadata. 

If your enterprise holds a large catalog of golden images, each with hundreds of components, managing the tasks (and the image sprawl) described above can quickly overwhelm you (and your team).

At Bitnami, we have invested time and effort to develop a production-grade tool that addresses all of these common problems. This tool, which we call Stacksmith, makes it easy for enterprise IT teams to create, deploy, and maintain a library of golden images for cloud and container platforms. Stacksmith generates cloud images and deployment templates that can be natively deployed on major cloud vendors and container services, and also provides the tools to monitor, inspect and rebuild those images.

Some of the key benefits of Stacksmith are:

  • Comprehensive, auditable tracking of image components in a versioned manifest 
  • Configuration reuse while simultaneously optimizing for specific cloud/container platforms 
  • Automated and continuous security monitoring 
  • Fully pluggable into existing systems and workflows 
  • Better enforcement of DevSecOps requirements and policies 

Does the list above sound like it would make your day job easier? If so, find out more by downloading our white paper on golden image management with Stacksmith. Or jump right in and give Stacksmith a test drive with a free 30-day trial of Stacksmith Team.

Wednesday, September 12, 2018

Deprecation of AWS Paravirtual Images

Nine years ago Bitnami started providing cloud images for open source software. The first cloud supported by Bitnami was AWS.

A lot has changed since then, including the virtualization technology used to provide AMIs. AWS started using a technology called paravirtual (PV) images, but later moved to a new virtualization type, hardware virtual machine (HVM).

Today, HVM is the recommended virtualization option due to its superior performance. Also, after recent security vulnerabilities in the Linux kernel, AWS published a security bulletin recommending that users migrate to HVM instances, because PV images offer insufficient system protection to address process-to-process risks.

For these reasons, and because we are committed to providing optimized and secure images, Bitnami has decided to stop providing updates for PV images after October 1st. Existing images will continue to be available; however, they will no longer be updated with security fixes.

Affected Servers


Deprecating PV images does not impact running servers; however, we strongly recommend that you move your applications to an instance based on an HVM image.

In order to check if you are running a paravirtual server, you can look for the “Virtualization” property in the EC2 console description tab for your instance. In the official AWS documentation you can find the recommended upgrade path when moving from PV to HVM machines.
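The same check can be scripted with the AWS CLI. A small sketch, assuming the CLI is configured and substituting your own instance ID for the placeholder:

# Print the virtualization type (paravirtual or hvm) of an instance
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].VirtualizationType' --output text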

Affected Accounts and Next Steps


New regions and current instance types only support HVM. Please note that some instance types can only be launched in a VPC.

If your EC2 account was created before 2013-12-04, you might need to configure your account to support VPC. If that is the case, we advise that you:

  • Create a new AWS account. This will create an EC2-VPC account which comes with a default VPC in each region.
  • Select a region that you haven’t used before to launch your servers. When using a new region a VPC will be created by default.
  • Contact AWS support to migrate your EC2-Classic account to EC2-VPC. This may require removing old resources in your account.

You can find more information about enabling VPC support in your account and how to migrate a Linux instance from EC2-Classic to a VPC in the official AWS documentation.

Questions

For questions relating to your account, we recommend contacting AWS directly. Bitnami does not have access to your EC2 account details.

For any questions about PV support, please don’t hesitate to contact us in our community forum.

Tuesday, September 11, 2018

Integrating Stacksmith with Atlassian Bamboo, Bitbucket and Jira

By Wojciech Kocjan, Solutions Architect at Bitnami

Bitnami Stacksmith is a tool that automates and optimizes the packaging of your applications for deployment to cloud and container platforms. It also continuously monitors your applications for updates and patches, and maintains them so they remain up to date and secure.

At Bitnami, we understand that enterprise companies rarely conduct release management manually - it is a complex process used to coordinate developers, components, and steps that include application development, code check in, compiling and testing, versioning and releasing, and deploying. Typically, this process is automated using one or more Continuous Integration (CI) or Continuous Deployment (CD) tools to streamline the process.

Stacksmith performs a few critical pieces of the release management process. In certain use cases, like when trying to repackage your traditional applications so you can replatform them from your data center to the cloud, using Stacksmith as a ‘stand-alone’ solution can be the right approach. With that being said, Stacksmith provides significant value in use cases that involve ongoing application development, deployment and maintenance, which is why we felt it was  critical that Stacksmith be able to fit easily into any existing CI / CD or orchestration tooling you already utilize.

This article goes through the process of creating a fully automated setup where Bitnami Stacksmith is part of the continuous integration (CI) cycle and generates up-to-date cloud deployable assets. Here, I’ll use a Java web application that uses a database, but the process is applicable for different setups and use of the various stack templates that Stacksmith provides.

My setup will consist of the following components:


The following image shows what the process of application development, compilation, and packaging will look like:


Why introduce Stacksmith to the CI / CD pipeline?


Atlassian Bamboo is a powerful continuous integration server that lets you automate release management in your CI / CD pipeline. It provides the basic tooling to define projects and how they’re built, but it’s up to its users to define the exact stages and tasks that should be run.

In many cases, when companies set up Bamboo, they focus on the development part of the lifecycle - compiling the application, then running unit tests to ensure it is working properly. Bamboo comes with multiple tasks for compiling different types of applications, running tests, and understanding their results. In these instances, the continuous deployment part is given less focus - in many cases, it simply involves copying the built artifacts to a staging and/or production server, and restarting it.

Bitnami Stacksmith can improve the CD part of the pipeline by enabling packaging of the application and creating deployable assets for various public clouds as well as Kubernetes. It creates outputs specific to the target platform - such as AMIs and CloudFormation templates for AWS, VM images and ARM templates for Azure, and Docker images and helm charts for Kubernetes. Stacksmith outputs are also optimized for each platform - leveraging load balancers, databases and other services available to those specific public clouds.

Stacksmith also lets you include your IT and security policies in the process - including image OS and configuration hardening, network policies, agents and any other policies, and applies them as it builds the deployable assets. This ensures that every image is built according to Security’s specification. In addition, Stacksmith can analyze whether those have changed, and either notify you or automatically repackage your application with the latest policies.

Stacksmith packages the application and provides environment-specific deployment mechanisms that launch the application along with all of its dependencies - setting up networking, databases, load balancers and other resources. The deployments can be used to automatically or manually launch the application. It can also be used to manage existing deployments - providing a smooth way to upgrade as well as downgrade the application in development, QA or production environments.

By having assets that allow new deployments of the application to be easily launched, it’s possible to perform full deployment of the application in the cloud, run automated tests of the application and its entire environment for each build, and validate the exact setup it will be run in.

The following illustrates how performing continuous packaging fits into the CI / CD pipeline:


Bitnami Stacksmith provides additional value by constantly monitoring the components that went into the deployable assets. It monitors trusted sites for new releases and security updates for your application's components, provides alerts when available, and offers a simple repackaging process.

Integrating Jira, Bitbucket, Bamboo and Stacksmith


First, I’ll set up Atlassian Jira, Bitbucket, Bamboo and Bitnami Stacksmith and show how the suite of tools works together to provide a CI / CD pipeline that includes the creation of deployable assets. Next, I will show how this pipeline performs end-to-end testing by launching the application in the cloud and verifying that it is working correctly. Finally, I’ll update the deployment, provided the application or deployable asset tests were successful.

The components and setup


The first piece of software is Atlassian Jira. It’s an issue and project tracker that is used to track projects - specifically to track the progress of application development as well as all the issues related to it.

The second application is Atlassian Bitbucket - it’s used for storing the source code of the application. It’s configured to be linked to the Jira server, following the process in the Atlassian documentation.

The third element is Atlassian Bamboo - it is the continuous integration (CI) server that allows you to define build plans and run them to create cloud deployable assets from the application’s source code. In order to have all the data available across all Atlassian apps, Bamboo is linked with Jira as well as Bitbucket. And since Stacksmith will be used to do the actual packaging and creation of the deployable assets, Bamboo will also be linked to Stacksmith.

The final and centrally important element is Bitnami Stacksmith - the tool that will actually create the assets that will be deployed to the cloud. All that is needed to set up Stacksmith is to sign in (if you don’t already have a subscription you can sign up for a free trial), and go through a very easy process of onboarding your AWS or Azure account.

Defining the CI pipeline - from source code to deployable assets


Now that all of the applications are set up, I can configure them so that the process of going from source code of the application to deployable assets that I can run in the cloud is fully automated.

Building the application from source code with Atlassian Bamboo


I’ll start with Atlassian Bamboo. I’ve created a very basic build plan that packages the application - simply checking out the source code from Bitbucket server and using Maven to build the application from source code.




Bamboo will now monitor the source code in the Atlassian Bitbucket server, and will perform a new build every time the application source code changes. Thanks to the links that have been set up in this integration, all of this data is also accessible from Jira.

Creating deployable assets using Bitnami Stacksmith


Next I need to set up Stacksmith, so it can be used to build the deployable assets. The first step is to set up the application definition in Stacksmith - this involves selecting the specific stack template you will use. The stack template informs Stacksmith about the type of application you are packaging and its requirements. In this case, I will select the  "Java Tomcat Application with DB (MySQL)" stack template, which informs Stacksmith to handle Tomcat set up and create the database, leveraging cloud-native resources if possible (such as RDS on AWS or Azure Database for MySQL on Azure). This step only has to be done once, and can be done via Stacksmith Web UI, its APIs, or the Stacksmith CLI.

The next step is to integrate Atlassian Bamboo with Stacksmith. This can be done with the stacksmith-cli tool that provides a convenient way to trigger rebuilds of an application in Stacksmith, retrieve the build log in real time, and get results back. All of the Stacksmith APIs, as well as the CLI tool, are available to all Stacksmith users - including those using the free trial.

The stacksmith-cli is used inside a CI tool such as Atlassian Bamboo. To configure integration with Stacksmith, a simple YAML file with configuration for the CLI tool is created - Stackerfile.yml. Here is a sample of the file's contents:

appId: bitnami/apps/7cbf4630-8c6d-0136-d7fb-66a676282717
appVersion: 1.0.0
files:
  userUploads:
   - target/myapp-1.0.0.war
  userScripts:
    boot: boot.sh

The appId specifies the application identifier that’s provided by Stacksmith. The appVersion field specifies the version to pass to Stacksmith. The files section specifies the files to upload - the application file(s) as well as the scripts that Stacksmith should use.

The target/myapp-1.0.0.war file was built by Maven (as described earlier in this article) in Bamboo. The boot.sh file is a boot script that includes application-specific logic and specifies how to provision the application, with the details required for connecting to the database.

The Stackerfile.yml can either be written manually or generated programmatically - such as using the Maven filtering functionality to create the file with appropriate placeholders.

The last thing that is needed is to create a helper script called build-with-stacksmith.sh that calls the stacksmith-cli to build the application. Here are the contents of the script:

#!/bin/bash

set -eu -o pipefail

stacksmith auth login --access-token "${STACKSMITH_ACCESS_TOKEN}"
stacksmith build

The Stackerfile.yml, boot.sh and build-with-stacksmith.sh files are committed to the repository with the source code of the application, which makes it possible to call them directly from Bamboo.

Setting up Bamboo to trigger a build in Stacksmith is easy. It requires setting up the stacksmith-cli binary on the Bamboo server. Then all that is needed is to create an additional step in the build plan - running the build-with-stacksmith.sh script from the source code repository.




The STACKSMITH_ACCESS_TOKEN environment variable is passed to the script and contains an API token that can be retrieved using the stacksmith-cli.

This new step can be tested by manually running a new build, which calls the CLI tool and triggers a new build in Stacksmith. The link and live information about the build are also shown in Bamboo at build time, and are accessible after the build finishes.



Bamboo is already monitoring the source code repository in Bitbucket for changes - any new change to the application code will now trigger a rebuild, fully automating the path from source code to deployable assets.

Continuous deployment to the cloud


At this point our application is packaged and Stacksmith has created the charts or templates that will be used to launch the new deployment or update an existing deployment - Helm charts for Kubernetes, CloudFormation templates for AWS and ARM templates for Azure. This enables you to create a test deployment and run additional tests against the application as well as upgrade the application version in the development, QA or production environments. The Stacksmith API or the stacksmith-cli tool can be used to retrieve the outputs from the packaging process.

Having a simple way to deploy the application makes it easier to run tests of the entire cloud deployment. All that’s needed is a relatively small script that can be run from Atlassian Bamboo to retrieve the template URL, launch a test deployment, and run tests against the application.

Below is a sample shell script that runs tests using AWS. Deploying to Azure or using helm charts is very similar, except they use different client tools.

#!/bin/bash

# unique name for the deployment
stack_name=teststack-$(date +%Y%m%d%H%M%S)-$$

# template URL is retrieved from Stacksmith and passed as argument
template_url=$1

# launch the template in AWS or fail immediately
aws cloudformation create-stack --stack-name "${stack_name}" \
  --template-url "${template_url}" || exit 1

# wait for the CloudFormation stack to finish launching
aws cloudformation wait stack-create-complete --stack-name "${stack_name}"

# retrieve URL of the application from AWS
url=$(aws cloudformation describe-stacks --stack-name "${stack_name}" | jq -r \
  '.Stacks[0].Outputs[] | select(.OutputKey == ("PublicDnsName")) | .OutputValue')

# run test and store whether it succeeded
./application-tests.sh "${url}"
code=$?

# delete the deployment
aws cloudformation delete-stack --stack-name "${stack_name}"

exit ${code}

The application-tests.sh script should perform application-specific tests against the URL at which the application is available. It should indicate that the tests have failed by returning a non-zero exit code. The tests can include anything - such as interacting with the application via APIs and/or web automation tests.
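As an illustration, a minimal application-tests.sh might be nothing more than a smoke test that probes the deployed URL and exits non-zero on failure; the script below is a hypothetical starting point, not part of the Stacksmith tooling.

#!/bin/bash
# Hypothetical minimal application-tests.sh: fail unless the application answers with HTTP 200

set -eu -o pipefail

url=$1
status=$(curl -s -o /dev/null -w '%{http_code}' "http://${url}/")

if [ "${status}" != "200" ]; then
  echo "Smoke test failed: ${url} returned HTTP ${status}" >&2
  exit 1
fi

echo "Smoke test passed: ${url}"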

Similar to the build-with-stacksmith.sh file, this script can be committed to the application source code repository and set up to be run as a task in Bamboo.

Deploying an updated version of the application in various environments is very similar - the only thing that is needed is to update the existing deployment - using aws cloudformation update-stack command instead of create-stack. For example:

#!/bin/bash

set -eu -o pipefail

# template URL is retrieved from Stacksmith and passed as argument
template_url=$1

# launch the template in AWS or fail immediately
aws cloudformation update-stack --stack-name "deployment-staging" \
  --template-url "${template_url}"

# wait for the CloudFormation stack update to complete
aws cloudformation wait stack-update-complete --stack-name "deployment-staging"

The script above will update the existing deployment in AWS and wait to ensure it has succeeded. It should also be committed to the application source code repository and invoked as a task from Bamboo. It should be invoked after the tests of the deployment have passed.

The packaged application can also be manually launched from the Stacksmith Web UI. Launching the application into any cloud simply requires choosing the right build and clicking the Launch button:


This starts the process of deploying the packaged application, redirecting the user to the cloud provider’s console to finish the process - for AWS, this is done using the AWS CloudFormation Console; for Azure, it starts the deployment of an ARM template in the Azure Portal.


Monitoring application for security issues and updates


Stacksmith provides functionality to ease the maintenance of your applications. It continuously monitors the components that make up the deployable assets (such as system packages), checking for updates and known security vulnerabilities. It alerts the user about security issues as well as whether updates are available that resolve the security issues. It also provides convenient manual options or fully-automated and continuous ways to repackage the application.

This ensures a smooth process for upgrading your application, making it easy to repackage a fixed version of the application and update existing deployments to fix any security issues.

Summary


As this article has demonstrated, it is easy to utilize our set of APIs and CLIs to integrate Bitnami Stacksmith with Jira, Bitbucket and Bamboo. This combination enables you to set up a fully automated pipeline that starts with source code and ends with the creation of assets you can deploy to the cloud.

The Stacksmith APIs and CLIs can also be used to integrate with other popular CI / CD tools. It is easy to apply the same steps we discussed here to an environment that utilizes Jenkins, GitLab and many other solutions.

Package, deploy, and maintain your own application today. Sign up for the Stacksmith 30 day free trial and find out how easy it is!

Get started with Stacksmith now!

Have questions about how Stacksmith can fit with the CI / CD tools you currently use?  Or want a personalized demo?  Contact us at enterprise@bitnami.com.




Thursday, September 6, 2018

The Rick and Martin Show - How to get a MEAN application running on the cloud in minutes

Bitnami’s VP of Engineering Rick Spencer and Martin Albisetti, Director of Engineering for Stacksmith, introduce the basics of Stacksmith in this first episode of a new series of vlogs - The Rick and Martin Show. In this episode, Martin demonstrates how to get a MEAN application running on the cloud in minutes and highlights a few of the features available to maintain that application. They also introduce configuration scripts, build targets, and other essential concepts.



Thursday, June 28, 2018

Maintaining your image-based application deployments


By Martin Albisetti, Senior Architect

The model shift to image-based deployments

In a previous post I covered why, once you decide to move to the cloud, you will likely also want to switch to image-based deployments (also called “immutable infrastructure”). This is because the way IaaS and PaaS infrastructure is designed and maintained is different than the way you managed servers in your data center. If you don’t make this switch, you will encounter a host of challenges that will likely result in poor serviceability and, ultimately, frustration on your part.

For example, clouds give you elasticity so you can use resources on-demand and scale up and down as needed over time or for small spikes in usage throughout the day. But to gain this, you must give up relying on the stability of each individual server or instance. In the cloud, you will find yourself spinning up new servers a lot more often than you did in the data center (some auto-scaling scenarios might be doing this several times an hour), so time-to-serve is crucial. And if you intend to do server configuration after they’ve booted, you can expect many minutes of wait time – perhaps even in the 30-minute range – before you can start serving requests. There’s also the risk factor that moving bits around on a system and downloading from different sources on the internet will tend to be flaky, and these processes might fail more often than you’d like them to and need to be restarted from scratch.

What we built into Bitnami Stacksmith to help you with the shift

In order to be able to do image-based deployments, building an image needs to be easy, repeatable and fast. This is because now (in the image-based deployment world), incorporating an update means creating an entirely new image, not, as was the case in your data center, deploying only the component that changed.

Stacksmith is one of the easiest ways to create a deployable image. Stacksmith will not only take care of all the underlying work needed to build an image, but it also produces a target-specific deployment template – a Helm Chart, AWS CloudFormation Template, or Azure Resource Manager (ARM) Template - so the output is immediately deployable.

Whether using the UI or the API, Stacksmith carefully narrows the inputs it takes, so at the end of the process you not only get a built image in your own cloud account, but, to the extent possible, a guarantee that the process will be reproducible tomorrow, or in 6 months.

Stacksmith also makes it incredibly easy to rebuild your image to incorporate updates - it is exactly one button click away.

The other crucial area where Stacksmith provides real value is with its approach to monitoring for updates. We automate this process as well, so you don’t have to spend your time monitoring trusted sites manually.

Stacksmith's approach to maintenance - focus on updates, enhance with CVEs

When a new security issue hits the press, the conversation typically hinges around a CVE number and, if it’s disruptive enough, a codename. It’s important to understand the issue that was reported, as it’s common for issues to only affect specific scenarios that may not be yours.

However, when we built Stacksmith, we instead focused on updates first. It’s rare for you to be able to mitigate issues without updating the software involved, so when designing and building the tool we made sure we provided a really solid way of tracking system updates as well as making them trivial to apply.

There’s also the occasional update that fixes an issue that’s not strictly security-related but, due to unforeseen circumstances, requires certain packages to be updated in order to keep functioning properly. Kernel updates needed to run well on top of fast-changing cloud providers, and TLS certificate updates, come to mind.

Then, on top of updates, we’ll layer CVEs to let you understand how important it is to apply the updates, what your risk exposure is, and be able to track and mitigate security issues across hundreds of applications from one place.

System package updates and language runtime dependency updates

Traditionally (and generally), dependencies in the Linux world were sourced from the distribution’s repositories. Sourcing your updates from the system package repositories like this meant you didn’t have to worry about whether all the software components fit and worked together, and gave you assurance that licensing and security concerns were taken care of by deep experts in those communities.

But as the pace of change in open source software started to accelerate, with more contributors, changes, and new projects created daily, the Linux distributions could no longer keep up with certain parts of the ecosystem.

Applications started using more and more dependencies from their language runtime repositories, and more importantly, those are the dependencies developers tend to care about and update more frequently. And these dependencies are easier to track, as they are part of the normal development workflow.

So when we designed Stacksmith, we decided to treat these two types of updates separately, and focus first on the system package updates, as these are less accessible to developers and typically the ones that get neglected.

How Stacksmith handles updating

For every image that gets built, Stacksmith extracts the package manager’s state at the very last step of the build process, and captures all the system dependencies that were installed - with their exact version numbers as well as which repositories were used to install them. This information forms what we refer to as the components manifest - a comprehensive and detailed listing of all of the components of your application package (this manifest is also an incredibly valuable tool for auditing, but that is a topic for another post).
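To make the idea concrete, the commands below show the kind of raw data such a manifest captures on a Debian or Ubuntu based image: every installed package with its exact version, plus the configured repositories. This is only an illustration of the concept, not how Stacksmith actually implements it.

# Illustrative only: capture installed packages with exact versions and the configured repositories
dpkg-query -W -f='${Package} ${Version}\n' | sort > components-manifest.txt
grep -rh '^deb ' /etc/apt/sources.list /etc/apt/sources.list.d/ >> components-manifest.txt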

From then on, Stacksmith tracks the relevant repositories, compares them to the components manifest, and records whenever an update is available. Stacksmith then alerts you to the availability of these updates through the product UI, and, optionally, notifies you or automatically kicks off a new image-build process that incorporates the updated packages.

Stacksmith UI showing a list of packages that have updates available

It is important to note that I am talking about tracking both the default dependencies that are required for your application to run on the target platform, as well as dependencies that get brought in later. For example, to offer greater flexibility during the image creation process, Stacksmith lets you define actions and dependencies via the use of scripts, which get invoked at the moment the package is being built, booted, or even run for the first time. To Stacksmith, these are all key to the goal of creating a final, deployable image, so the repositories that get added at this phase are also tracked. Stacksmith goes to great lengths to ensure we are tracking everything that’s important to keep each image updated.

Throughout all of this, there’s very little you need to learn or understand about how images are built. As a user, you deal with the end result, which is a familiar bootable image that will have your software installed and be ready to work.

As you can see, one of our goals at Bitnami is to keep lowering the barrier of adoption of cloud-native approaches to software development and deployment. Learn more at bitnami.com/stacksmith. And expect to see additional features get added to Stacksmith that lower the barrier to adopting Kubernetes and public clouds.

Monday, August 21, 2017

Container Trends – Bitnami User Survey 2017 (Part 2)

Survey Says: Kubernetes

Name the top 5 container orchestration solutions.

Top answer on the board: Kubernetes


The rise of Kubernetes as the leading container orchestration tool should come as no surprise. It’s the topic of the day, with Amazon and Microsoft recently joining the CNCF. In a few short years, we’ve seen hundreds of companies join the ecosystem, building or modifying solutions to support Kubernetes (check out the recently updated CNCF Landscape), and we’ve even seen the early days of acquisitions beginning to happen. With all that is being said and written, it’s still good to back up a trend with some good old-fashioned data.

As promised (see Container Trend Part 1), in our second post covering our recent user survey we’re taking a look at container orchestration trends. In our last blog post, we showed the increase in interest, highlighting a more than 2x increase in production container usage from 2016 to 2017. As that increase in container usage was happening, what impact did that have on how containers were being managed? Of course, we’d expect some increase in usage of container orchestration to match that growth.

We asked our users “What Container Orchestration System(s) does your company use?” and the results were surprising in a few ways. First and foremost was the enormous growth of Kubernetes. And while Mesos usage doubled, it still pales in comparison to new entrant offerings like AWS Elastic Container Service and Azure Container Service. Docker Swarm showed significant growth over that period as well, perhaps due to Swarm being included in the Docker 1.12 release. The least surprising bit of data was the sharp decline in users with no container orchestration, which is supportive of the shift from dev/test to production.

Figure 1. Container Orchestration Adoption - 2016 vs 2017


Key Stats:
  • 115% growth in businesses using Kubernetes
  • 100% growth in Mesos
  • AWS Container Service overtaking Docker Swarm in less than 1 year

Digging deeper into container orchestration, we wanted to understand the scenarios in which the various platforms are being used. Knowing there was such a huge shift to production environments in the past year, and seeing the impact that had on orchestration adoption above, we wanted to understand if there was a preference for one platform over another as users make that move. For the most part, platform selection for dev/test is aligned with production. Focusing specifically on 2017 in this data set, we can see that Kubernetes, AWS and Azure usage all increased a few percentage points over their general adoption numbers when users were focused on production usage, with the largest number of users selecting Kubernetes.

Figure 2 – Container Orchestration Adoption 2017 – Dev/Test vs. Production



Key stats:
  • Kubernetes is the platform of choice for over 50% of existing production container deployments

If you are making a decision on where to invest as you build out a container strategy, or you’re looking for tools that can help you manage your Kubernetes environment, you’ve come to the right place. Bitnami can get you started on your journey with pre-packaged container images from our vast catalog of ready-to-run applications, and we’re actively developing and contributing to a number of leading-edge Kubernetes projects.

Stay tuned for more from our 2017 Bitnami user survey. Next time we’ll break down container orchestration a little further and look at mixed usage …we’re just getting started.



Wednesday, May 24, 2017

Introducing ksonnet, an Open Source configuration experience for Kubernetes


We are pleased to announce ksonnet today, an open source tool for configuring applications running on Kubernetes clusters that we have built in collaboration with our friends from Box, Microsoft and Heptio.

Bitnami's mission is to make awesome software available to everyone. We originally started providing easy to use native installers for popular open source server software. We've quickly expanded into providing virtual machines, cloud images and, more recently, containers.

Kubernetes has emerged as the leader in deploying production container workloads. Though Kubernetes can be thought of as an orchestration system, it has turned into a full-fledged platform that others can build on. A large ecosystem of contributors has emerged, providing tooling around monitoring, security, management and any other aspect of building and maintaining Kubernetes clusters. In particular, Bitnami has been involved with the Helm package manager and related projects such as Monocular and Kubeless, the Kubernetes-native serverless framework.

Internally, we have been early adopters of Kubernetes ourselves. In the process of migrating all of our infrastructure to Kubernetes, we ran into scenarios that pushed the limits of what current solutions could deal with. As a result, we have ended up creating our own tooling to help define and manage complex Kubernetes deployments. Around the same time, Heptio was working on a similar project and approached us to combine efforts, resulting in ksonnet.

ksonnet is an open-source tool for configuring applications running on Kubernetes, based on the jsonnet templating library. It is designed to be easy to use, yet extensible and powerful enough to cover as many scenarios as possible.

Our goal is that ksonnet will help lower the barrier of adoption for Kubernetes and will continue to evolve and integrate with the rest of the Kubernetes ecosystem. Though it has just been released, it is already being worked on by an active group of contributors that includes Red Hat, CoreOS, Box and Microsoft. We are particularly excited about the integration with the Helm project, allowing the generation of Helm charts that support ksonnet as an alternative to existing templates.

Together with Heptio, we are excited to share ksonnet with the community and help push Kubernetes further into the mainstream. Give it a try today and let us know what you think!

Thursday, October 20, 2016

Dirty COW (CVE-2016-5195): Privilege escalation vulnerability in the Linux Kernel

[2016-10-26]

All the affected cloud images and virtual machines have been successfully patched.

If you are using a Bitnami Cloud Hosting instance, you can easily patch it by following the guide below while we upgrade the base images.

[2016-10-24]

The Bitnami Team is happy to announce that our images on Google, Azure and the AWS Marketplace, as well as our regular images, have been properly updated. Additionally, we will continue to work on releasing the images for all of our cloud platform partners and virtual machines.

----

A new security vulnerability in the Linux kernel has been discovered. You can find more information about it in the following research report.

A race condition was found in the way the Linux kernel's memory subsystem handled the copy-on-write (COW) breakage of private read-only memory mappings. An unprivileged local user could use this flaw to gain write access to otherwise read-only memory mappings and thus increase their privileges on the system.

This could be abused by an attacker to modify existing setuid files with instructions to elevate privileges.

We believe it is of the utmost importance to quickly address any security issues in applications distributed by Bitnami and our team is working to update all of the affected Virtual Machines and Cloud Images available through Bitnami for all Cloud Providers.

Once the new kernel is available, you can update it by running the following commands (you must run the command specific to your distribution):

  • Ubuntu / Debian
sudo apt-get update && sudo apt-get dist-upgrade 

You will have the fixed version of the kernel after rebooting your server.

  • Oracle Linux, Red Hat, CentOS and Amazon Linux
sudo yum update 

You will have the fixed version of the kernel after rebooting your server.
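A simple way to confirm that the reboot picked up the patched kernel is to compare the running kernel release before and after:

# Print the running kernel release; it should show the updated version after the reboot
uname -r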

If you have any questions about this process, please post to our community support forum and we will be happy to help! 

Friday, August 12, 2016

Security Notification: Off-Path TCP Linux Kernel Vulnerability (CVE-2016-5696)

[UPDATE: 2016-08-22]

BCH images have been updated properly. You can now launch new servers that mitigate the vulnerability.

[UPDATE: 2016-08-18]

All the affected cloud images and virtual machines have been successfully patched.

If you are using a Bitnami Cloud Hosting instance, you can easily patch it following the guide below while we upgrade the base images. 

[UPDATE: 2016-08-17]

The Bitnami Team is happy to announce that the images for Google, Azure, 1&1 and GoDaddy have been updated properly. Additionally, we continue working on releasing the images for all of our cloud platform partners, virtual machines and the native installers.

----

A new security vulnerability in the Linux kernel has been discovered. You can find more information about it in the following research report: "Off-Path TCP Exploits: Global Rate Limit Considered Dangerous".

Since the Linux kernel code affected was implemented in 2012 (in Linux Kernel 3.6), all Bitnami-packaged images might be affected by this issue if the kernel hasn't been updated. At the time of writing this post, a new patched kernel has NOT been released for Debian and Ubuntu distributions that are the base OS for most of the Bitnami Virtual Machines. We will keep you updated in this blog post.

We believe it is of the utmost importance to quickly address any security issues in applications distributed by Bitnami and our team is working to update all of the affected Virtual Machines and Cloud Images available through Bitnami for all Cloud Providers. 

In the meantime, you can mitigate this problem by applying the following patch in your system:
sysctl net.ipv4.tcp_challenge_ack_limit=1073741823; grep -q tcp_challenge_ack_limit /etc/sysctl.conf || echo "net.ipv4.tcp_challenge_ack_limit=1073741823" >> /etc/sysctl.conf
Please note that this is just a temporary solution that makes it much harder for attackers to successfully exploit this vulnerability. You can find more information about this temporary fix in a writeup on the Akamai blog.

Once the new kernel is available, you can update it by running the following commands (you must run the command specific to your distribution):


  • Ubuntu 
sudo apt-get update && sudo apt-get dist-upgrade 
You will have the fixed version of the kernel after rebooting your server.

  • Debian 
sudo apt-get update && sudo apt-get dist-upgrade 
You will have the fixed version of the kernel after rebooting your server.

  • Oracle Linux 
sudo yum update
sudo yum upgrade
You will have the fixed version of the kernel after rebooting your server.


  • Amazon Linux & RedHat Linux
sudo yum clean all
sudo yum update kernel
You will have the fixed version of the kernel after rebooting your server. 


If you have any questions about this process, please post to our community support forum and we will be happy to help! 

Tuesday, May 10, 2016

WordPress Stack with PHP7

WordPress announced a few months ago that it is fully compatible with the latest version of PHP, PHP 7. Nowadays, most of the popular plugins are already compatible, and WordPress has also published a developer guide about how to update WordPress plugins to support PHP 7.

Here, at Bitnami, we baked a new WordPress stack based on PHP7 to help you run the latest, shiniest and fastest software. WordPress + PHP7 is faster than ever before.

But that's not all. If you still want to run WordPress on PHP 5.6, now you can. Use the Bitnami LAMP Stack and install the WordPress module on it, or use the WordPress Legacy Stack. The WordPress Legacy Stack ships the same, latest version of WordPress but bundles PHP 5.6.

Both new WordPress versions are available as installers, virtual machines, and cloud images on the Bitnami WordPress Stack page.

If you have questions about Bitnami WordPress or the advantages of using PHP7 over PHP5.6, please post to our community forum, and we will be happy to help you.

Tuesday, May 3, 2016

Security notification: OpenSSL 1.0.2h / 1.0.1t

A new security vulnerability was recently discovered in certain versions of OpenSSL. More information about the vulnerability is available on the OpenSSL website: https://www.openssl.org/news/secadv/20160503.txt

There are two high-severity security issues that do not affect Bitnami installations:

1. Memory corruption in the ASN.1 encoder (CVE-2016-2108).

  • All of the currently released Bitnami stacks use an OpenSSL version greater than the affected versions: 1.0.2c or 1.0.1o.

2. Padding oracle in AES-NI CBC MAC check (CVE-2016-2107). 

  • The OpenSSL we ship with the Bitnami installers, virtual machines and cloud images does not enable AES-NI encryption.

The Bitnami team will continue working on updating OpenSSL to 1.0.2h for all Bitnami apps. However, to be clear, the two security issues above do not affect the applications that are currently available.
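If you want to double-check which OpenSSL release your own installation bundles, you can ask the binary shipped with the stack directly. The path below assumes the usual Bitnami layout and may differ on your installation:

# Print the bundled OpenSSL version (path is an assumption; plain 'openssl version' checks the system copy)
/opt/bitnami/common/bin/openssl version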

Friday, April 29, 2016

Open edX "Dogwood" Is Now Available from Bitnami!


We're happy to announce a new version of the Bitnami Open edX stack!

Open edX is the open-source online learning platform originally conceived by edX, a nonprofit online learning destination founded by Massachusetts Institute of Technology and Harvard University that offers courses from the world’s best universities and institutions. The Open edX platform provides development tools to create, teach, and manage courses, student experiences, and learning outcomes at Internet scale.

Some of the new features in this new version are:
  • Partial credit
  • Open edX Analytics Developer Stack
  • Initial Version of Comprehensive Theming
  • Additional File Types for Open Response Assessments
  • Timed Exams
  • LTI XBlock
  • Otto Ecommerce Service
Several features are deprecated as of the Open edX Dogwood release:
  • Original ORA ("ORA1") Problems
  • Legacy Instructor Dashboard
  • Studio Checklist page
  • Certain XModules and Tools, including the graphical_slider_tool and the FoldIt protein simulator
  • The psychometrics and licenses Django apps
With Bitnami, developers can deploy a ready-to-run Open edX Stack with just one click. To get started, choose from our all-in-one free native installers, virtual machines or cloud images.

If you have questions about Bitnami Open edX Stack, please post to our community forum and we will be happy to help.

Wednesday, March 23, 2016

Join Bitnami at GCP Next 2016!

GCP NEXT 2016 begins today, and we are excited to announce that we are a proud partner. We will be demoing the latest additions to Stacksmith, which focus on combining easier, up-to-date container creation with integrations to CI systems (such as Jenkins) and container orchestration systems (like Kubernetes).

Stop by our booth (#8), and say hi to our engineers that are behind the project.




More information about GCP NEXT:

At GCP NEXT, you’ll have the opportunity to attend three visionary keynotes presented by GCP with industry leaders and 30 in-depth technical sessions, participate in self-paced code labs, and hear how other IT leaders rely on GCP for mission-critical cloud solutions.

The 2-day conference includes sessions designed to help you build on your cloud strategy:
  • From idea to market in less than 6 months: Creating a new product with GCP
  • IoT - from small data to big data: Building solutions with connected devices
  • Security analytics for today's cloud-ready enterprise 
  • Your new super power: Using machine learning to build applications that understand the world
Can’t make it to GCP NEXT in person? Don’t worry, you can live stream GCP NEXT for free. Register here: https://goo.gl/lrjTHV