Channel: All Things Cloud

How do I size my Cloud Foundry Production Deployment - shekel to the Rescue

Blog Post under construction - Due Date July 17 

Technology Barriers To Moving Apps and Services to a PaaS

Language/Runtime Characteristics: CF only supports languages that have a buildpack and that run on Linux. Cloud Foundry currently supports Ubuntu Trusty and CentOS 6.5 on vSphere, AWS, OpenStack, and vCloud infrastructures.

Degree of Statefulness - Runtime state includes caching and sessions, and coordination of state across peer instances impacts performance. Bad candidates for the PaaS include: apps that require heavy persistent data processing, apps that require large-scale stateful connections to external services, apps built on frameworks that persist client state on the server side (like JSF), and apps that keep a large amount of critical runtime state within the JVM. Ideally all persistent state should reside outside the app in an HA data persistence tier.

File System - Cloud app instances are ephemeral. Local file system storage is short-lived, and instances of the same app do NOT share a local file system. Avoid writing state to the local file system. Does the app rely on a persistent local filesystem or storage?

Transactionality - 2PC transactional commit does not work across multiple distributed REST services. Prefer BASE (Basically Available, Soft state, Eventually consistent) over ACID (Atomicity, Consistency, Isolation & Durability). Transaction managers typically write transaction logs to the local file system and rely on persistent server identity - both of these are unavailable by design in CF. http://queue.acm.org/detail.cfm?id=1394128

Configuration - Configuration should live external to the deployed app. Config must come from the environment, via environment variables or an external configuration server. Apps that have embedded property files or one war/ear file per environment require refactoring of configuration code before running on the PaaS. We recommend one code base tracked in revision control across multiple deploys. Can you deploy to multiple environments with a single codebase?
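
A minimal sketch of environment-driven configuration (the app and variable names here are hypothetical, not from the original post):

# Externalize config as environment variables instead of baking it into the artifact
cf set-env my-app PAYMENT_SERVICE_URL https://payments.example.com
cf set-env my-app FEATURE_FLAGS "new-checkout,beta-search"
cf restage my-app    # restage so the staged droplet sees the new environment
cf env my-app        # verify the environment the running app will receive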

Web Frameworks: Does the app leverage Java EE or Spring frameworks, and if so are the libraries bundled within the app or do they come with the app server? A strong dependence on an application server like WebSphere to provide dependencies, or the use of proprietary application server APIs, forces the app onto a particular buildpack or, worse, requires refactoring to embed the libraries within the app.

Startup/Shutdown Characteristics: Prefer apps that start cleanly without a lot of upfront initialization and coordination, and that shut down without the need for comprehensive cleanup of state.
Is a startup/shutdown sequence necessary for your app to run effectively? Avoid creating any process instances outside the staged runtime.

Scalability of the App - Does the app rely on X-axis (horizontal duplication, scale by cloning), Y-axis (functional decomposition, scale by splitting different things) or Z-axis (data partitioning, scale by splitting similar things) scaling? Design the app as one or more stateless processes, enabling scale-out via the process model. Which other dependent services need to be scaled when the app is scaled?
Dependencies - How are external dependencies wired into the app? Are dependencies isolated and explicitly declared? http://microservices.io/articles/scalecube.html

Inbound Protocols - Only a single inbound port is open to an application, and the only inbound protocols supported by CF are HTTP/HTTPS/WebSocket. No other protocols like RMI or JMX will work on the inbound side (unless tunneled over HTTP). HTTPS is terminated at the load balancer, so an app that relies on transport security at the server will need to be modified. Apps that hold persistent long-running socket connections to the server will not work, with apps using WebSocket being the only exception. Any sort of direct peer-to-peer messaging or custom protocol will not work, since the warden containers are configured for ingress only over HTTP.

Logging - Are there multiple log streams? Is there a strong requirement on local persistence of the logs? Treat the app's logs as event streams. Prefer console-based logging to file-based logging: configure app server logging to log to the console (stdout/stderr) and thereafter drain to a long-term log analytics solution. Logs written to the local file system in CF are not persisted.
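
As a hedged example (the drain endpoint and app name are illustrative), an app that already writes to stdout/stderr can have its log stream drained to an external analytics endpoint with a user-provided service:

# Create a user-provided log drain and bind it; loggregator forwards the app's stdout/stderr to it
cf cups my-log-drain -l syslog://logs.example.com:514
cf bind-service my-app my-log-drain
cf restart my-app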

Rollback - Immutable code with instant rollback capability. Can the app tolerate multiple versions? Can the app be reverted to a previous version with a single action? Does deployment of the app require orchestration of multiple steps?

Security - Does the app rely on a network-centric or app-centric security model? We recommend relying on application Layer 7 security rather than network security based on firewall rules. How are application users authenticated and authorized? Is a federated identity and authorization solution in place? In CF, outbound egress from the app is controlled by application security groups applied to the warden container. You will need to configure whitelist rules for the services bound and the protocols used for outbound communication.

Application Size - cf push times out for apps bigger than 250MB. Staging of apps that exceed a certain size becomes a big drain on resources, both for network marshalling/unmarshalling and for the runtime memory footprint. Keeping the droplet smaller results in faster deployment. client_max_body_size can be used to change the max upload size on the Cloud Controller.
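
One way to keep the uploaded bits small (a sketch; the ignore patterns are illustrative) is a .cfignore file at the app root, which cf push honors much like .gitignore:

# .cfignore keeps large, non-runtime artifacts out of the upload
cat > .cfignore <<'EOF'
*.log
docs/
src/test/
target/test-classes/
EOF
cf push my-app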

Performance & Monitoring - Can the app exhaust CPU under load? Are there any concurrency bottlenecks? Can the app be instrumented for metrics? Does the app provide health check/actuator support in production? Does the app leverage correlation IDs for distributed requests and responses? Does the underlying platform provide basic monitoring (CPU, disk, memory), recovery (respawning of exited instances) and instant (manual/automated) scaling? Can logging and admin operations be enabled via runtime switches, aka curl requests?

Developer Experience - Do the app architecture and web frameworks facilitate local development as well as cloud production? Does the app run on runtimes that work equally well locally and in the cloud? Does the app use (Spring) profiles to encapsulate conditional app configuration?

Cloud Native Architecture - Does the app compensate for the fallacies of distributed computing by leveraging framework libraries like Netflix OSS and/or Spring Cloud OSS? Does the app leverage zero-downtime practices of forward and backward schema compatibility for REST microservices and databases? Can one service be deployed independently of other services? Is lockstep deployment of physical WARs needed to manifest a logical app?

Data Tier Services/Databases - Any persistence employed by the app has to be synchronized across datacenters and availability zones. The data tier/ Stateful tier of the app has to be managed in lock-step with the stateless tier. Focus has to be put on live content and data migration along with CAP theorem issues. App code has to be forward and backward compatible with schema changes in the data tiers.

For more on the exact process, i.e. the HOW of migrating apps to Cloud Foundry, check this Pivotal blog post from Josh Kruck: http://blog.pivotal.io/pivotal-cloud-foundry/features/how-do-i-migrate-applications-to-pivotal-cloud-foundry

PaaS and the Disruption of the Financial Tech Industry


A number of companies from the financial services industry are looking at Cloud Foundry or already have production deployments of Cloud Foundry. Why are the deployment and uptake of such a platform particularly strong in financial tech? The reason is that this is an industry that is in a fervor of disruption and disaggregation. The incumbents have taken notice and are coming to the battle with guns drawn. Take a look at this picture.
http://fermi.vc/post/72559525330/disaggregation-of-a-bank

As we look at the future of enterprises with CF, as senior consultants and technical leaders it behooves us to paint a picture of the future. For a look at the ideal digital future of this kind of platform-driven transformation you need look no further than the European bank ING, which has undergone a complete digital transformation. ING's Koos Timmermans spoke at the Goldman Sachs European Financials Conference in Rome on June 16, 2015. Take a look at this slide deck http://www.slideshare.net/ING/goldman-sachs-16-june-2015, in particular slides 6 and 9. Note that ING used a home-grown PaaS and not CF. To quote Koos from the slide deck -
Investment of EUR 200 mln to further simplify, standardize and automate IT: • Decommissioning 40% of application landscape • Moving 80% of applications to zero-touch private cloud.
I also really enjoyed the following four-part series from ThoughtWorks (plus a related O'Reilly piece) where the authors share their experiences and insights on ushering tech-fueled innovation into incumbent financial services organizations.
  1. http://www.thoughtworks.com/insights/blog/innovation-eating-consumer-finance
  2. http://www.thoughtworks.com/insights/blog/great-financial-services-companies-deliver-products-not-projects
  3. http://www.thoughtworks.com/insights/blog/winning-companies-master-strategic-technology
  4. http://www.thoughtworks.com/insights/blog/how-practice-continuous-innovation
  5. http://radar.oreilly.com/2015/05/finance-transformation-oreilly-next-money.html
Much of what these articles state we already know; however, the information is provided in the context of the financial tech industry and peppered with customer names and case studies. These articles propose looking at all IT through a bi-modal lens - strategic and utility IT - and then running strategic IT via a lean methodology in an innovation lab:
  • Shift from a project mentality to a product paradigm
  • Break down large-scale, high-risk product development efforts into smaller, low-risk experiments.
  • Provide these teams with the budget discretion to quickly test multiple ideas in parallel. Teams have to clearly define the problem and identify appropriate metrics to measure success.
  • Collaborate early and often with stakeholders from technology, business, marketing, advertising, product, etc.


To achieve this transformation banks have to effect both technical and cultural change.

Cultural Change

  • Pivotal Way - Agile development with pair programming, TDD, Open Cloud Platform
    • From Silos to building shared toolsets, vocabularies, and communication structures in service of a culture focused on a single goal: delivering business value
    • Focus on open code and tooling across AppOps and InfraOps
    • Punctuated Equilibrium to Continuous Delivery
    • Centralized Governance to Decentralized Autonomy
    • Business Capability Teams - cross-functional teams that develop products rather than projects & Site Reliability Team

Technical Change

  • Containerization
  • Cloud Native Microservices  Architectures - Systems designed with the fallacies of distributed computing in mind, Mobility, API-based collaboration, Patterns that optimize for speed, safety and scale, Self-service agile Infrastructure
  • Strategic vs Utility IT
  • Strategy For Legacy apps
    • App Decomposition Best Practices
    • Decompose Monoliths
    • Decompose Data

IBM Bluemix & Pivotal Cloud Foundry - A Tale of Two PaaS

Parameters for comparison
- Ramp for Developers
- Devops Experience
- Pushing Docker Images
- Services (Beta and Production)
- Cost for a stress test

Migrating WebSphere Application Server Applications to Cloud Foundry

Migration of WebSphere Application Server apps to Cloud Foundry entails dissecting the app for API usage and then configuring the implementation of the same APIs on a servlet container or an alternate open source EE server. My app server of choice for Java EE APIs is Tomcat + Spring Framework or Apache TomEE. 

Migrations of Java EE applications vary in complexity, depending on a series of factors that are identified during the migration:
  • Source and target Java SE specification (JRE 1.5 to JRE 1.8)
  • Source and target Java EE specifications (J2EE 1.4 to Java EE 7)
  • Usage of specific Java components (EJBs, JSPs, JSF, etc.)
  • Usage of proprietary code that is specific to an application server
  • Vendor-specific deployment descriptors
  • Extended capabilities
  • Vendor-specific server configurations (JMS providers and JDBC connection pooling)
  • Class loading issues
  • Integration with complex external systems such as a CRM or SAP
From a code perspective an application owner has to ask questions related to Java, EJB usage, servlets & JSPs, web services, database access, JMS, JNDI naming, application trace & logging, UI frameworks, transactions, threads, sockets and XML/REST/JSON. The configuration and usage of these technologies will affect the migration of the app to Cloud Foundry. From a deployment perspective the following areas need to be probed: current hardware, the current network edge load balancing solution and expectations around session affinity, expected downtime during upgrade, current administration constructs, details of the HTTP server/proxy tier, hardware details, and security (how exactly is the security of the app and app server managed?). From an architectural perspective, to understand the technical barriers and issues in porting apps to the cloud-native CF platform, please read Roadblocks to PaaS.

Process

  1. Get the lay of the land. Identify ALL the open source and proprietary APIs used by the app. Determine what modifications you will need to make to the app if the target runtime packaged by the buildpack does not include the said libraries.

    APIs

    • Make a list of all WebSphere proprietary APIs used by your app.
    • Make a list of all the Java EE APIs used by your app.
    • Make a list of all Spring Framework APIs used by the app.
    • Make a list of all 3rd party open source frameworks used by the app. (A quick grep sketch for building this inventory appears at the end of this step.)

    Analysis Tools

    • Leverage static analysis tools like FindBugs, Checkstyle, PMD, Coverity and SonarQube. Write custom rules that will provide a breakdown of the external API usage. From The Wise Developers' Guide to Static Code Analysis: PMD is a very customizable tool, so the performance and installation outcome really depends on how one has instructed the tool to find problems. Coverity is a commercial tool that does a much deeper analysis than other tools, and it also keeps track of the issues that you've found and fixed. SonarQube gives the best combination of usefulness and transparency against the relative complexity of setting it up and solving weird configuration issues.
    • Run Google code search on your app source repository. Code Search is a tool for indexing and then performing regular expression searches over large bodies of source code. It is a set of command-line programs written in Go.
    • Use a modern IDE like Netbeans, Eclipse and IntelliJ to find usage and references of classes and packages in your app code.
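
    A quick-and-dirty way to build that API inventory (a sketch; the package prefixes and source path are illustrative) is to grep the source tree for vendor imports:

    # Count imports of WebSphere-specific packages across the source tree
    grep -R --include='*.java' -E 'import (com\.ibm\.websphere|com\.ibm\.ws)' src/ \
      | awk -F'import ' '{print $2}' | sort | uniq -c | sort -rn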
  2. Look at all the XML files bundled within the app. These XML deployment descriptors fall into two categories: standard descriptors and app-server-specific ones. You will need to replace the vendor-specific deployment descriptors with TomEE descriptors or Spring beans in Tomcat.
    Java and vendor deployment descriptors and configuration from the WebSphere Application Server V8.5 Migration Guide

  3. If possible, disaggregate the EAR file into multiple WAR files and tie them together as one logical app. This will increase loose coupling and force you to disentangle a web of dependencies, which will make the transition to a microservices architecture much easier. You can choose to lift and shift your EAR file as-is too, since some of the buildpacks below allow the staging of EAR files. It is typically OK to duplicate code like utility JARs across multiple WAR files, since it leads to loose coupling and better isolation of function. Note that as of today only the Liberty buildpack supports the cf push of EAR files; the Tomcat-based Java buildpack does not support EAR files.
  4. Look at shared libraries configured for the app server across apps. These server scoped shared libraries will need to be bundled in the app when it is pushed to CF.
  5. Take advantage of Java EE 6 and collapse EJBs and servlets into a single WAR file. There are no classloader boundaries between servlets and EJBs, so EJBs and servlets can share all third-party libraries (like Spring!). Use annotations and free your code from deployment descriptor hell.
  6. Do not concern yourself with anything that deals with clustering, workload management, high availability, smart routing or proxying. These concerns are now taken care of by the underlying platform i.e. CF.
  7. Check if there is any native code and ensure that the native library is included with the app and built on the linuxfs32 file system. When porting across Java EE versions please consider the breaking changes across EE specifications detailed in 3.3 J2EE to Java EE migration considerations
  8. Address runtime Migration Concerns:
    • A common issue in migrating JMS resources is that the way each application server configures JMS resources varies from vendor to vendor. Before migrating your JMS definitions, you should decide which messaging engine to use to exchange messages asynchronously. CF provides multiple JMS & AMQP messaging providers.
    • Reconfiguring the JDBC resources is a common migration step. Use the appropriate JDBC driver to avoid problems with new implementations for new databases. The Java buildpack makes it easy to consume SQL and NoSQL data stores in the app running on the platform. You can auto-configure connections to back-end services or use spring-cloud-connectors to explicitly control the JDBC connection to the persistence tier in the cloud.
    • There are several areas where you configure security resources for your migrated applications and the administrative functions and roles, such as authentication mechanisms, authorization mechanisms, and security for web services. Configure the target app server in Cloud Foundry with the correct AAA mechanisms to port all your security apparatus. Apache Tomcat 8 Security Considerations
    • Port JVM arguments, class path, and system properties to the buildpack and cf push manifests in Cloud Foundry.
    • Understand the external factors that affect migration like interaction with external CRM/ERP systems via XML or legacy web-service stacks.
  9. By default, the Tomcat instance provided by the Java buildpack is configured to store all sessions and their data in memory. If your app relies on session persistence then configure session replication in CF by binding the app to a data store like Redis (see redis session replication on Cloud Foundry). In some cases the app may need to be explicitly configured to use the session database, based on the service variables injected via the VCAP_SERVICES environment variable. CF by default supports sticky sessions, i.e. it honors the JSESSIONID cookie.
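    As a sketch (the service, plan and app names are hypothetical and depend on what your marketplace offers):

    cf marketplace                                  # find the Redis offering available in your environment
    cf create-service redis shared-vm my-sessions   # service and plan names are illustrative
    cf bind-service my-app my-sessions
    cf restage my-app                               # the app reads the credentials from VCAP_SERVICES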
  10. If the app does not run on WebSphere Liberty Profile then use the WebSphere Migration Toolkit to get the app running on the WebSphere Liberty Profile. The Liberty Tech Preview tool helps you move WebSphere Application Server full profile applications to Liberty profile, which can be running inside or outside of the cloud. Follow the steps outlined by Cindy High on moving applications to Liberty using the migration toolkit.
  11. You have now come to a fork in the road. You have multiple choices [AppServer(WebSphere | TomEE | JBOSS) * Cloud packager(Buildpack | Docker)] in deploying your app to CF.
    1. Run the app in Cloud Foundry using the WebSphere Liberty Profile buildpack. WebSphere Liberty Profile provides production support for Java EE7 and requires zero migration across versions of the app server. The licensing model for running Liberty in production is NOT clear. The way I read the license, WebSphere Liberty Profile can be used in production on CF for JVM heaps <= 2GB. So this gives you the license to install Liberty and fixes thereafter, not entitlement to support. No serious enterprise can run all their business on a 2GB JVM heap. The catch here is that all your apps in total across the org cannot exceed 2GB in heap. You can share 2GB across as many instances of Liberty as you like, as long as the total JVM heap space across the instances does not exceed 2GB. ;) Basically the first instance is free and thereafter you pay. Of course this licensing does not apply to Bluemix, where you can run as many instances as you like as long as you pay the Bluemix GB/hr cost.
    2. Run the app in Cloud Foundry using TomEE. See this blog post from Romain Manni-Bucau that explains how to get the app running on Bluemix, which is the IBM derivative of CF. The same technique will work on open source and other distributions of CF. This technique stages and runs the app as an embedded TomEE server within the app. 50 Shades of TomEE explains the many ways an executable fat JAR can be cooked with your app and TomEE binaries. Another popular packaging option is to follow recipes and prepare a Docker container of the app using the official TomEE Docker image from TomEETribe. Deployment of Docker images in Cloud Foundry is possible with the Diego BOSH release; Diego supports running Docker images natively within CF. The recommendation here is to ONLY package the app in Docker and rely on the platform to inject service metadata. A great guide on packaging apps with Docker and keeping your Docker images thin can be found in Adam Bien's blog JAVA EE 7 + THIN WARS + DOCKER = GREAT PRODUCTIVITY.
    3. Stage and run the app on JBoss, now renamed WildFly, a Java EE-certified application server, using the jboss-buildpack. Red Hat has provided a detailed guide on how to migrate applications running on WebSphere to WildFly: was2jboss. If your application is not cloud-native, then package and deploy your app using the official WildFly Docker image.
    4. Refactor the app based on decomposition recipes outlined by Matt Stine in Migrating to Cloud-Native Application Architectures, such as the anti-corruption layer and strangling the monolith. As you refactor the app to a microservices-based architecture, use Spring Cloud OSS to implement the distributed services patterns necessary to overcome the fallacies of distributed computing. A playbook on app refactoring is provided by Josh Kruck here. Run the refactored, slimmed-down app on a specialized micro-server like Spring Boot or Dropwizard. What we want is a simple, lightweight, fire-and-forget packaging tool for server-side code taking advantage of executable JARs' simplicity and platform independence. Merging all dependencies into a single JAR can be done with tools like Maven's Shade plugin or Gradle's Shadow plugin. Before deciding on Spring Boot or Dropwizard, understand the tradeoffs as explained by the good folk at Takipi.
    5. If you do not want to put any effort into refactoring the app, since it is of low strategic value and business impact, then simply packaging the app in a Docker container and pushing it to PCF will allow you to deploy the app to the platform and leverage the operational & cost benefits with very little development cost. Run the IBM WebSphere Application Server Full and Liberty profiles in a Docker container following the documentation in FullWasInDocker and LibertyInDocker. Again, licensing headaches apply :-(. To get started with running in WebSphere, go through this IBM InterConnect 2015 session on WebSphere Application Server Liberty Profile and Docker.
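
    For option 5, pushing a prebuilt Docker image looks roughly like the sketch below (the image name is illustrative, and the exact flag depends on your cf CLI and Diego versions):

    # Requires a CF deployment with Diego enabled for Docker workloads
    cf push legacy-was-app -o registry.example.com/legacy/was-app:latest
    cf logs legacy-was-app --recent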

    Porting strategies for migrating apps to Cloud Foundry


    Migration Recipes 

    You should consider the following options when lifting and shifting apps to Cloud Foundry:

    1. Deploy app as a VM/Job with BOSH + persistent disk
    2. Dockerize with patterns and Recipes
    3. Lift and Shift JavaEE Full profile apps to TomEE BP, JBOSS BP, Liberty Profile BP, WebLogic BP
    4. Lift and Shift JavaEE Web profile apps to Tomcat Java BP.
    5. Refactor heavily used portions of app with decomposition patterns.
    6. Bootify: Create a self-contained fat JAR, embedding the app server runtime into the app.
    7. Convert the app to a managed service by creating an app service broker that provisions and manages the app. Expose the app as a service bindable to other apps.
    8. Employ VM templating technologies like TOSCA, vRealize or vCAC brokers to deploy app VMs.
    9. Rewrite the app following 12-factor, modern cloud-native app architectures.

    Apps cannot live without data. In addition to the app-code the persistence tier may also need to be refactored to accommodate the migrated app.

    Key Features in Cloud Foundry that Enable App Migration 

    1. Diego
    2. CF v3 app-service
    3. TCP Routing
    4. Persistence Support for Diego
    5. Context Path Routing
    6. Routing service

    Remediations when moving apps to a PaaS


    Anti-Cloud Pattern -> Remedy
    • Multiple inbound protocols like RMI, JMX, custom TCP -> TCP routing, tunneling
    • Persistence in local VMs like caches, transaction logs, CMS -> Externally managed data services, persistent volume support, sshfs service, EBS & S3 blob store, mounting external NFS
    • Transactions (JTA & 2-phase commit) -> Use standalone transaction managers like Atomikos and Bitronix; introduce eventual consistency patterns
    • Spaghetti configuration -> Externalize; plug in config via a config server, or an init script in the .profile.d directory that injects the config as environment variables (see the sketch after this table)
    • Logging and monitoring -> Punch holes in outbound security groups, CF v3 API, different process types
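
    A minimal sketch of the .profile.d remedy for spaghetti configuration (the variable names and values are illustrative); Cloud Foundry sources every script in this directory before the app process starts:

    # .profile.d/00-config.sh -- sourced before the app starts inside the container
    export SPRING_PROFILES_ACTIVE=cloud
    export CONFIG_SERVER_URI=${CONFIG_SERVER_URI:-https://config.example.com}
    export LOG_LEVEL=INFO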



    Migrating The Monolith

    Debugging DEA Issues on Cloud Foundry - Halloween Edition

    Ever wondered why Cloud Foundry does not support ssh'ing into the warden container? Such a feature could be useful in so many situations - TechOps, troubleshooting, debugging etc. cf-ssh is coming to Cloud Foundry via Diego; however, until you deploy Diego in production you will need to live with the process below to ssh into a particular application instance's warden container.

    # Step 1 - Find app guid

    C:\Users\Administrator\workspace\FizzBuzz>cf app --guid fizzbuzz
    90291bd7-ce52-43ee-aaa1-ed0405863c4a

    # Step 2 - Find DEA IP address and port


    C:\Users\Administrator\workspace\FizzBuzz>cf curl /v2/apps/90291bd7-ce52-43ee-aaa1-ed0405863c4a/stats
    {
       "0": {
          "state": "RUNNING",
          "stats": {
             "name": "fizzbuzz",
             "uris": [
                "fizzbuzz.kelapure.cloud.pivotal.io"
             ],
             "host": "192.168.200.27",
             "port": 61015,
             "uptime": 1489673,
             "mem_quota": 1073741824,
             "disk_quota": 1073741824,
             "fds_quota": 16384,
             "usage": {
                "time": "2015-09-29 02:36:12 +0000",
                "cpu": 0.003104336638753874,
                "mem": 546877440,
                "disk": 187445248
             }
          }
       }
    }


    # Step 3 - Locate DEA Job that maps to the DEA IP from previous step

    ubuntu@pivotal-ops-manager:~$ bosh vms
    Acting as user 'director' on 'microbosh-38a3a7433db69fa7d159'
    Deployment `cf-938e3d9bec67dbffeacc'

    Director task 178

    Task 178 done


    ubuntu@pivotal-ops-manager:~$ bosh vms --details
    Acting as user 'director' on 'microbosh-38a3a7433db69fa7d159'
    Deployment `cf-938e3d9bec67dbffeacc'

    Director task 179

    Task 179 done

    +----------------------------------------------------------------+---------+--------------------------------------------------------------+----------------+-----------------------------------------+--------------------------------------+--------------+
    | Job/index                                                      | State   | Resource Pool                                                | IPs            | CID                                     | Agent ID                             | Resurrection |
    +----------------------------------------------------------------+---------+--------------------------------------------------------------+----------------+-----------------------------------------+--------------------------------------+--------------+
    | ccdb-partition-0d7a243620d08147fd3a/0                          | running | ccdb-partition-0d7a243620d08147fd3a                          | 192.168.200.15 | vm-df650acf-88e9-4b1a-b68e-a2ff11b47a65 | 1e315067-aef5-4f5d-ad00-50b492f98085 | active       |
    | clock_global-partition-0d7a243620d08147fd3a/0                  | running | clock_global-partition-0d7a243620d08147fd3a                  | 192.168.200.22 | vm-f905d0b1-d4f4-4646-826b-6255e19fc0f0 | 992e57f7-003b-42f2-9818-3db5d4a64402 | active       |
    | cloud_controller-partition-0d7a243620d08147fd3a/0              | running | cloud_controller-partition-0d7a243620d08147fd3a              | 192.168.200.18 | vm-29aeeadb-b9c3-4ca2-ab32-49573e957697 | 413a5a5f-06f4-4359-9074-13d3bbd11e35 | active       |
    | cloud_controller_worker-partition-0d7a243620d08147fd3a/0       | running | cloud_controller_worker-partition-0d7a243620d08147fd3a       | 192.168.200.23 | vm-ded90340-de9f-4d15-8ac4-540669395d8a | 300814cb-6f23-4068-86ba-b6569bedf259 | active       |
    | consoledb-partition-0d7a243620d08147fd3a/0                     | running | consoledb-partition-0d7a243620d08147fd3a                     | 192.168.200.17 | vm-9ef7f4fe-f6a9-4bab-89cb-0942d499e40c | bef26f14-cb91-4bbe-bc1d-3d2d27368ba6 | active       |
    | consul_server-partition-0d7a243620d08147fd3a/0                 | running | consul_server-partition-0d7a243620d08147fd3a                 | 192.168.200.12 | vm-e57e77dd-25c6-4d3f-a440-4dd0b31d28f2 | af13ba63-068f-4ee8-bd8a-48b02438767e | active       |
    | dea-partition-0d7a243620d08147fd3a/0                           | running | dea-partition-0d7a243620d08147fd3a                           | 192.168.200.27 | vm-699ed235-f7f4-4094-a92e-963ef726b1d6 | 437bbc76-7d1c-4ce9-916d-4e6a6355537f | active       |
    | dea-partition-ee97ca1101e7cc2c048a/0                           | running | dea-partition-ee97ca1101e7cc2c048a                           | 192.168.200.28 | vm-91378b2c-7592-4b14-9f73-07c57043fe75 | 561899ad-00d4-4eec-a7f3-a1e8e9c84e10 | active       |
    | doppler-partition-0d7a243620d08147fd3a/0                       | running | doppler-partition-0d7a243620d08147fd3a                       | 192.168.200.29 | vm-7ca4b286-13d9-4d84-aec0-8a1fcdf81cae | 4f243e01-2c89-4f77-8a37-31b8b0c5f1c8 | active       |
    | doppler-partition-ee97ca1101e7cc2c048a/0                       | running | doppler-partition-ee97ca1101e7cc2c048a                       | 192.168.200.30 | vm-150f7c12-2bf9-4da3-bfec-8e342051a203 | f454702e-db36-4e37-a02d-3c44e1c57822 | active       |
    | etcd_server-partition-0d7a243620d08147fd3a/0                   | running | etcd_server-partition-0d7a243620d08147fd3a                   | 192.168.200.13 | vm-fe166f4f-e060-407d-8ba1-74d50f17e22e | 1ea07f41-aa95-4ef0-9a3a-8a7a90886b1c | active       |
    | ha_proxy-partition-0d7a243620d08147fd3a/0                      | running | ha_proxy-partition-0d7a243620d08147fd3a                      | 192.168.200.20 | vm-1fb87e81-d1cd-4abe-82fc-91bddf9e99dc | 2d4a5048-9a9d-4a8f-bf34-0967b8f0bbf5 | active       |
    | health_manager-partition-0d7a243620d08147fd3a/0                | running | health_manager-partition-0d7a243620d08147fd3a                | 192.168.200.21 | vm-9e84e893-22a2-45c4-8f49-bcd65013494a | 7e4ce9e1-d5e9-4ec8-9ba0-2b9885a20fd5 | active       |
    | loggregator_trafficcontroller-partition-0d7a243620d08147fd3a/0 | running | loggregator_trafficcontroller-partition-0d7a243620d08147fd3a | 192.168.200.31 | vm-f3d7075b-f3b6-4ea0-8307-590ee63e7bf3 | 7bf8256d-903f-4ce0-b56e-8f88c00c80ea | active       |
    | loggregator_trafficcontroller-partition-ee97ca1101e7cc2c048a/0 | running | loggregator_trafficcontroller-partition-ee97ca1101e7cc2c048a | 192.168.200.32 | vm-f4ed6afa-2073-47b0-8ae3-d9a08e85a209 | 0df8b5e8-e539-471f-ac0d-4e6d68c79c09 | active       |
    | mysql-partition-0d7a243620d08147fd3a/0                         | running | mysql-partition-0d7a243620d08147fd3a                         | 192.168.200.26 | vm-a73374e9-ffd8-4cd3-8bb4-519d35778143 | c4da718c-0964-4004-82b8-58c3341e8cb9 | active       |
    | mysql_proxy-partition-0d7a243620d08147fd3a/0                   | running | mysql_proxy-partition-0d7a243620d08147fd3a                   | 192.168.200.25 | vm-e94670b3-3754-40f6-b86b-3dcf88d8542e | 06a3c9ae-3e64-4eec-82e7-0c91be97e81c | active       |
    | nats-partition-0d7a243620d08147fd3a/0                          | running | nats-partition-0d7a243620d08147fd3a                          | 192.168.200.11 | vm-9b9855f5-3e5f-4de8-872b-347fbc56c984 | 4e621e1a-718a-4c55-931f-2c7b68504d80 | active       |
    | nfs_server-partition-0d7a243620d08147fd3a/0                    | running | nfs_server-partition-0d7a243620d08147fd3a                    | 192.168.200.14 | vm-a28bad65-4fac-40ac-a383-4b30725966e1 | f02359a8-5c03-4942-ab6c-5ac32c536d6a | active       |
    | router-partition-0d7a243620d08147fd3a/0                        | running | router-partition-0d7a243620d08147fd3a                        | 192.168.200.19 | vm-5b00e6db-669d-4115-b786-5c0b6c5b6c78 | 82111c58-b944-4099-8f56-5ec6b195e54a | active       |
    | uaa-partition-0d7a243620d08147fd3a/0                           | running | uaa-partition-0d7a243620d08147fd3a                           | 192.168.200.24 | vm-aab6ab86-3eca-490d-aa7a-9d720b057721 | 5e95d2f3-f871-4a0c-ae81-ee1effb5fa1a | active       |
    | uaadb-partition-0d7a243620d08147fd3a/0                         | running | uaadb-partition-0d7a243620d08147fd3a                         | 192.168.200.16 | vm-a472fe4e-40d8-4c3f-a95a-e6e8e521cf86 | 336af737-8bc1-4f62-bd40-ae3b63f85637 | active       |
    +----------------------------------------------------------------+---------+--------------------------------------------------------------+----------------+-----------------------------------------+--------------------------------------+--------------+

    VMs total: 22

    In our case the DEA Job is  *dea-partition-0d7a243620d08147fd3a/0*


    # Step 4 Login into the DEA VM

    ubuntu@pivotal-ops-manager:~$ bosh ssh dea-partition-0d7a243620d08147fd3a/0 --public_key y.pub
    Acting as user 'director' on deployment 'cf-938e3d9bec67dbffeacc' on 'microbosh-38a3a7433db69fa7d159'
    Enter password (use it to sudo on remote host): ********
    Target deployment is `cf-938e3d9bec67dbffeacc'

    see https://github.com/cloudfoundry/bosh-lite/issues/134  if stuck on bosh ssh


    # Step 5 Locate the warden container path


    If you grep for `fizzbuzz` and the instance port gleaned from step 2 in the list below, you will find the following

    "warden_container_path": "/var/vcap/data/warden/depot/18tlhc59f3v",

    bosh_yms06qjkj@437bbc76-7d1c-4ce9-916d-4e6a6355537f:/var/vcap/data/warden/depot$ sudo cat /var/vcap/data/dea_next/db/instances.json

    [sudo] password for bosh_yms06qjkj:
    {
      "time": 1443494275.6732097,
      "instances": [

        {
          "cc_partition": "default",
          "instance_id": "07a4a39d7ec14249863303246c73dfa2",
          "instance_index": 0,
          "private_instance_id": "5e15175c18644561828e52534b7c7a71fad5773f13b045058eaed76793618b1f",
          "warden_handle": "18tlhc59erj",
          "limits": {
            "mem": 512,
            "disk": 1024,
            "fds": 16384
          },
          "health_check_timeout": null,
          "environment": {
            "CF_PROCESS_TYPE": "web"
          },
          "services": [

          ],
          "application_id": "093a9e11-0b06-45f1-b3c4-e801ad0aca81",
          "application_version": "2febead7-c0fe-43a5-b4a4-724c5581c037",
          "application_name": "tmfnodetest",
          "application_uris": [
            "tmfnodetest.kelapure.cloud.pivotal.io"
          ],
          "droplet_sha1": "fcba91c3099ba1218febfcb043d3792ccc9bba97",
          "droplet_uri": null,
          "start_command": null,
          "state": "RUNNING",
          "warden_job_id": 461,
          "warden_container_path": "/var/vcap/data/warden/depot/18tlhc59erj",
          "warden_host_ip": "10.254.2.41",
          "warden_container_ip": "10.254.2.42",
          "instance_host_port": 61127,
          "instance_container_port": 61127,
          "syslog_drain_urls": [

          ],
          "state_starting_timestamp": 1442435859.1693246
        },

        {
          "cc_partition": "default",
          "instance_id": "9383f43f0e3549ad9c29c511c5f4211e",
          "instance_index": 0,
          "private_instance_id": "23065b9faddb43d2ae296802c1db8cbdbf2a2d00dd2343849c8825bcfc7eb044",
          "warden_handle": "18tlhc59enu",
          "limits": {
            "mem": 1024,
            "disk": 1024,
            "fds": 16384
          },
          "health_check_timeout": null,
          "environment": {
            "CF_PROCESS_TYPE": "web"
          },
          "services": [
            {
              "credentials": {
                "agent-name": "nginx-hello",
                "host-name": "ca-apm.springapps.io",
                "port": "5001"
              },
              "options": {

              },
              "syslog_drain_url": "",
              "label": "user-provided",
              "name": "ca_apm_10",
              "tags": [

              ]
            }
          ],
          "application_id": "90291bd7-ce52-43ee-aaa1-ed0405863c4a",
          "application_version": "09f093ae-c9e8-46ba-89c8-e56d7b84b671",
          "application_name": "fizzbuzz",
          "application_uris": [
            "fizzbuzz.kelapure.cloud.pivotal.io"
          ],
          "droplet_sha1": "e5fc94cc79eb72489e94c6b620887c5b72244b89",
          "droplet_uri": "http://staging_upload_user:c601ae7f5ae3745d40ee@192.168.200.18:9022/staging/droplets/90291bd7-ce52-43ee-aaa1-ed0405863c4a/download",
          "start_command": null,
          "state": "RUNNING",
          "warden_job_id": 92,
          "warden_container_path": "/var/vcap/data/warden/depot/18tlhc59enu",
          "warden_host_ip": "10.254.0.85",
          "warden_container_ip": "10.254.0.86",
          "instance_host_port": 61015,
          "instance_container_port": 61015,
          "syslog_drain_urls": [
            ""
          ],
          "state_starting_timestamp": 1442004499.0728624

        },

        {
          "cc_partition": "default",
          "instance_id": "fa057152a7094ee6bfd0cd28c8cb76dc",
          "instance_index": 0,
          "private_instance_id": "18b2515c046742ae979fbfa58428120608960bc04f7444759983d1c91621e7da",
          "warden_handle": "18tlhc59f3v",
          "limits": {
            "mem": 1024,
            "disk": 1024,
            "fds": 16384
          },
          "health_check_timeout": null,
          "environment": {
            "CF_PROCESS_TYPE": "web"
          },
          "services": [

          ],
          "application_id": "d9969088-1f7b-40b3-a048-c71814d172c4",
          "application_version": "431e7f3d-774b-4c24-ad4e-c6fda884dab1",
          "application_name": "spring-music",
          "application_uris": [
            "spring-music.kelapure.cloud.pivotal.io"
          ],
          "droplet_sha1": "9e35ddfd97a7c1a32e2a69d5e1c5f90c6d4b7e06",
          "droplet_uri": "http://staging_upload_user:c601ae7f5ae3745d40ee@192.168.200.18:9022/staging/droplets/d9969088-1f7b-40b3-a048-c71814d172c4/download",
          "start_command": null,
          "state": "CRASHED",
          "warden_job_id": 1265,
          "warden_container_path": "/var/vcap/data/warden/depot/18tlhc59f3v",
          "warden_host_ip": "10.254.2.113",
          "warden_container_ip": "10.254.2.114",
          "instance_host_port": 61395,
          "instance_container_port": 61395,
          "syslog_drain_urls": [

          ],
          "state_starting_timestamp": 1443494262.4416354
        }
      ],
      "staging_tasks": [

      ]
    }

    # Step 6 wsh into the warden container

    bosh_yms06qjkj@437bbc76-7d1c-4ce9-916d-4e6a6355537f:/var/vcap/data/warden/depot$ cd 18tlhc59enu

    bosh_yms06qjkj@437bbc76-7d1c-4ce9-916d-4e6a6355537f:/var/vcap/data/warden/depot/18tlhc59enu$ ls
    bin  destroy.sh  etc  jobs  lib  mnt  net_rate.sh  net.sh  run  setup.sh  snapshot.json  start.sh  stop.sh  tmp

    bosh_yms06qjkj@437bbc76-7d1c-4ce9-916d-4e6a6355537f:/var/vcap/data/warden/depot/18tlhc59enu$ sudo ./bin/wsh
    wsh   wshd  

    bosh_yms06qjkj@437bbc76-7d1c-4ce9-916d-4e6a6355537f:/var/vcap/data/warden/depot/18tlhc59enu$ sudo ./bin/wsh
    root@18tlhc59enu:~# ls
    firstboot.sh
    root@18tlhc59enu:~# cd /home
    root@18tlhc59enu:/home# cd vcap/
    root@18tlhc59enu:/home/vcap# ls
    app  logs  run.pid  staging_info.yml  tmp

    root@18tlhc59enu:/home/vcap# ll
    total 40
    drwxr-xr-x 5 vcap vcap 4096 Sep 11 20:48 ./
    drwxr-xr-x 3 root root 4096 Sep 11 20:48 ../
    drwxr--r-- 7 vcap vcap 4096 Sep 11 20:47 app/
    -rw-r--r-- 1 vcap vcap  220 Apr  9  2014 .bash_logout
    -rw-r--r-- 1 vcap vcap 3637 Apr  9  2014 .bashrc
    drwxr-xr-x 2 vcap vcap 4096 Sep 11 20:47 logs/
    -rw-r--r-- 1 vcap vcap  675 Apr  9  2014 .profile
    -rw------- 1 vcap vcap    3 Sep 11 20:48 run.pid
    -rw-r--r-- 1 vcap vcap 2000 Sep 11 20:47 staging_info.yml
    drwxr-xr-x 3 vcap vcap 4096 Sep 11 20:48 tmp/
    root@18tlhc59enu:/home/vcap#



    # Step 7 - Now you are free to ftp files out or take heap dumps or thread-dumps

    For instance to take a heapdump
    root@18tlhc59enu:/home/vcap/app/.java-buildpack/open_jdk_jre/bin# ll
    total 384
    drwxr-xr-x 2 vcap vcap   4096 Sep 11 20:47 ./
    drwxr-xr-x 5 vcap vcap   4096 Sep 11 20:47 ../
    -rwxr-xr-x 1 vcap vcap   8798 Jul 16 09:29 java*
    -rwxr-xr-x 1 vcap vcap   8909 Jul 16 09:29 jcmd*
    -rwxr-xr-x 1 vcap vcap   8909 Jul 16 09:29 jjs*
    -rwxr-xr-x 1 vcap vcap   8973 Jul 16 09:29 jmap*
    -rwxr-xr-x 1 vcap vcap   8981 Jul 16 09:29 jstack*
    -rwxr-xr-x 1 vcap vcap   8917 Jul 16 09:29 keytool*
    -rwxr-xr-x 1 vcap vcap   1146 Sep 11 20:47 killjava.sh*
    -rwxr-xr-x 1 vcap vcap   8981 Jul 16 09:29 orbd*
    -rwxr-xr-x 1 vcap vcap   8917 Jul 16 09:29 pack200*
    -rwxr-xr-x 1 vcap vcap   8917 Jul 16 09:29 policytool*
    -rwxr-xr-x 1 vcap vcap   8909 Jul 16 09:29 rmid*
    -rwxr-xr-x 1 vcap vcap   8917 Jul 16 09:29 rmiregistry*
    -rwxr-xr-x 1 vcap vcap   8917 Jul 16 09:29 servertool*
    -rwxr-xr-x 1 vcap vcap   8989 Jul 16 09:29 tnameserv*
    -rwxr-xr-x 1 vcap vcap 217462 Jul 16 09:29 unpack200*

    root@18tlhc59enu:/home/vcap/app/.java-buildpack/open_jdk_jre/bin# su vcap

    vcap@18tlhc59enu:~/app/.java-buildpack/open_jdk_jre/bin$ PID=` ps -ef | grep java | grep -v "bash\|grep" | awk '{print $2}'`
    vcap@18tlhc59enu:~/app/.java-buildpack/open_jdk_jre/bin$ echo $PID
    29

    vcap@18tlhc59enu:~/app/.java-buildpack/open_jdk_jre/bin$ ./jmap -dump:format=b,file=/home/vcap/app/test.hprof $PID
    Dumping heap to /home/vcap/app/test.hprof ...
    Heap dump file created

    vcap@18tlhc59enu:~/app/.java-buildpack/open_jdk_jre/bin$ ls -al /home/vcap/app
    total 230616
    drwxr--r-- 7 vcap vcap      4096 Sep 29 03:06 .
    drwxr-xr-x 5 vcap vcap      4096 Sep 11 20:48 ..
    drwxr-xr-x 5 vcap vcap      4096 Sep 11 20:47 .java-buildpack
    -rw-r--r-- 1 vcap vcap     82155 Sep 11 20:47 .java-buildpack.log
    drwxr--r-- 3 vcap vcap      4096 Sep 11 20:46 META-INF
    drwxr--r-- 3 vcap vcap      4096 Sep 11 20:46 my-resources
    drwxr--r-- 3 vcap vcap      4096 Sep 11 20:46 org
    -rw------- 1 vcap vcap 236032777 Sep 29 03:06 test.hprof
    drwxr--r-- 5 vcap vcap      4096 Sep 11 20:46 WEB-INF


    Spring Boot Actuator metrics collection in a spreadsheet

    If your app is a Spring Boot app that has the actuator enabled, then use this nifty script from Greg Turnquist's Learning Spring Boot book, with some changes from me, to collect all the metrics in a CSV.


    package learningspringboot

    @Grab("groovy-all")
    import groovy.json.*

    @EnableScheduling
    class MetricsCollector {

        def url = "http://fizzbuzz.cfapps.io/metrics"

        def slurper = new JsonSlurper()
        def keys = slurper.parse(new URL(url)).keySet()
        def header = false;
        @Scheduled(fixedRate = 1000L)
        void run() {
            if (!header) {
                println(keys.join(','))
                header = true
            }

            def metrics = slurper.parse(new URL(url))


            println(keys.collect{metrics[it]}.join(','))

        }
    }



    How do you run this script?

    To start collecting stats in the CSV:
    1. First install the Spring CLI following the instructions here: http://docs.spring.io/autorepo/docs/spring-boot/1.1.4.RELEASE/reference/html/getting-started-installing-spring-boot.html#getting-started-installing-the-cli
    2. Thereafter save the snippet above as metrics.groovy
    3. On Windows, install wintee and run the script with: spring run -q metrics.groovy | wintee metrics.csv
    4. On Unix, run the script with: spring run -q metrics.groovy | tee metrics.csv
    Happy Debugging!!



    Chasing Cloud Foundry OutOfMemory Errors - OOM

    If you are ever unfortunate enough to be troubleshooting application, Java heap, or native process OOM issues in Cloud Foundry, follow the playbook below to get to the root cause:

    1. Include the attached script dump.sh at the root of your JAR/WAR file. You can edit the LOOP_WAIT variable in the script to configure how often it will dump the Java NMT info. I'd suggest somewhere between 5 and 30 seconds, depending on how long it takes for the problem to occur. If the problem happens pretty quick, go with a lower number. If it takes hours, then go with something higher.

    2. Make a .profile.d directory, also in the root of the JAR / WAR file. For a detailed explanation on using .profile.d to profile native memory checkout this note from CF support engineer Daniel Mikusa.

    3. In that directory, add this script.
    #!/bin/bash
    $HOME/dump.sh &

    This script will be run before the application starts. It starts the dump.sh script and backgrounds it. The dump.sh script will loop and poll the Java NMT stats, dumping them to STDOUT. As an example see the simple-java-web-for-test application. There is also an accompanying load plan here

    4. Add the following parameters to JAVA_OPTS: 
    JAVA_OPTS: "-XX:+PrintGCDateStamps -XX:+PrintGCDetails -Xloggc:./jvm-gc.log -XX:NativeMemoryTracking=detail"
    The -XX:NativeMemoryTracking flag will enable native OOM analysis [1]. More on this later. The -Xloggc flag will allow you to pipe all GC output to a log that you can later analyze with tools like PMAT or GCMV.
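
    If you prefer not to edit the manifest, the same flags can be set directly on the app (a sketch; the app name is illustrative):

    cf set-env my-app JAVA_OPTS "-XX:+PrintGCDateStamps -XX:+PrintGCDetails -Xloggc:./jvm-gc.log -XX:NativeMemoryTracking=detail"
    cf restage my-app    # restage so the buildpack picks up the new JAVA_OPTS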

    5. Push or restage the application.

    6. In a terminal, run cf logs app-name > app-name.log. That will dump the app logs & the Java NMT info to the file. Please try to turn off as much application logging as possible, as this will make it easier to pick out the Java NMT dumps.

    7. Kick off the load tests.


    The nice thing about Java NMT is that it will take a snapshot of the memory usage when it first runs and every other time we poll the stats we'll see a diff of the initial memory usage. This is helpful as we really only need the last Java NMT dump prior to the crash to know what parts of the memory have increased and by how much. It's also nice because it gives us insight into non-heap usage. Given the Java NMT info, it should be easier to make some suggestions for tuning the JVM so that it doesn't exceed the memory limit of the application and cause a crash.

    8. If you have the ability to ssh into the container use the following commands to trigger heapdumps and coredumps
    JVM Process Id:
    PID=` ps -ef | grep java | grep -v "bash\|grep" | awk '{print $2}'`
    Heapdump:
    ./jmap -dump:format=b,file=/home/vcap/app/test.hprof $PID
    Coredump:
    kill -6 $PID  should produce a core file and leave the server running
    Analyze these dumps using tools like Eclipse Memory Analyzer and IBM HeapDump Analyzer 

    9. If you have the ability to modify the application, then enable the Spring Boot Actuator feature if it is a Boot app; otherwise integrate a servlet or script like DumpServlet and HeapDumpServlet into the app.

    Salient Notes:
    - The memory statistic reported by CF in cf app is in fact used_memory_in_bytes, which is a summation of rss and the active and inactive caches [2] [3]. This is the number watched by the cgroup Linux OOM killer.
    - The Cloud Foundry Java Buildpack by default enables the following parameter -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh
    Please do NOT be lulled into a false sense of complacency by this parameter. I have never seen this work in production. Your best bet is to be proactive when triggering and pulling dumps using servlets, JMX, kill commands, whatever ... 

    References:

    Method to get to the root cause of a memory leak in the Cloud

    From an application developer's perspective, our apps are increasingly run in russian-doll deployments, i.e. app code wrapped in an app server container, which is wrapped by an LXC container running within a VM, running on one or more hypervisors running on bare metal. In such a scenario determining the root cause of memory leaks becomes difficult. Please find below a process that could be used to get to the eureka moment.

    The basic principle to get to root cause is that of eliminating all the variables one by one. We start at the top of the stack and work our way down. Remember the JVM is like an iceberg. There is java heap that is above the water and an unbound native memory portion underneath the surface.  Java heap OutOfMemory errors are easier to fix than native memory leaks. Native leaks are generally caused by errant libraries/frameworks, JDKs, app-servers or some unexplained OS-container-JDK interaction.

    OK, let's get to it ...

    First establish a process to measure the JVM heap and native process size, perhaps using a dump script like https://github.com/dmikusa-pivotal/cf-debug-tools#use-profiled-to-dump-the-jvm-native-memory. Remember to take heapdumps before, during and after the load run. Once the test is close to completion, take native process core dumps using kill -6 or kill -11. This procedure of running the test is then repeated as you eliminate each variable below.
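
    A minimal sketch of what such a dump loop might look like (the interval is illustrative; requires the JVM to be started with -XX:NativeMemoryTracking=summary or detail):

    # Poll JVM native memory tracking and process RSS every 30 seconds
    PID=$(ps -ef | grep java | grep -v "bash\|grep" | awk '{print $2}')
    while true; do
      date
      jcmd $PID VM.native_memory summary
      ps -o rss= -p $PID     # resident set size in KB
      sleep 30
    done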

    1. [app] First look at the application source for the usual memory leak anti-patterns like DirectByteBuffers, thread locals, statics, classloader retention, missing resource cleanup, etc. This is where you will get the maximum bang for the buck. Take a heapdump of the JVM process and analyze it in Eclipse Memory Analyzer or HeapAnalyzer.

    2. [jdk] Eliminate the JDK as a factor of the leak by switching JVM implementations i.e. moving from OpenJDK to Hotspot or from OpenJDK to IBM JDK etc .. see the entire list of JVM impls https://en.wikipedia.org/wiki/List_of_Java_virtual_machines.

    3. [app-server] If simple eyeballing does not help then switch the app-server i.e. move from tomcat to jetty, undertow to tomcat. If your app runs on WebSphere or WebLogic and cannot be ported then my apologies. Call 1-800-IBM-Support.

    4. [container] If your droplet (app + libraries/frameworks + jvm) is running within a container in Cloud Foundry or Docker then try switching out the containers. i.e. If running within the warden container then run the same app within Docker container. Try changing Docker base images and see if the leak goes away.

    5. [hypervisor] If running on AWS switch to OpenStack or vSphere and vice versa. You get the idea. Cloud Foundry makes this easy since you can standup the same CF deployment on all three providers.

    6. [bare-metal] Run the app on the bare metal server to check if the leak persists.

    7. [sweep-under-the-rug] Once you are ready to pull your hair out, resort to tuning the JDK. Start playing with JVM options like -Xms234M  -Xmx234M -XX:MetaspaceSize=128M  -XX:MaxMetaspaceSize=128M -Xss228K. In cloud foundry this is set by the memory_calculator that is influenced by setting the memory_heuristics and memory_sizes env vars.
    • JBP_CONFIG_OPEN_JDK_JRE: '[memory_heuristics: {heap: 55, metaspace: 30}, memory_sizes: {metaspace: 4096m..}]'
    • JBP_CONFIG_OPEN_JDK_JRE: '[memory_calculator: {memory_sizes: {stack: 228k, heap: 225M}}]
    • JBP_CONFIG_OPEN_JDK_JRE: '[memory_calculator: {memory_heuristics: {stack: .01, heap: 10, metaspace: 2}}]'
    As you can see, the options become increasingly cumbersome as you keep going down this list. Your best hope is to catch it at 1, 2 or 3. Good luck hunting!

    Boiler plate apps for all buildpacks in Cloud Foundry

    If you are tired of searching for sample apps of different types that exercise all the buildpacks in Cloud Foundry, the list of boilerplate apps below comes to your rescue. Most of these apps need a backing service like MySQL which exercises the service binding code in CF and your app.

    List of all the Buildpacks supported by CF: http://docs.cloudfoundry.org/buildpacks/

    java_buildpack: Spring-Music : Allows binding based on profiles to mysql, postgres, in-memory, etc., https://github.com/pivotalservices/spring-music

    go_buildpack: pong_matcher_go: This is an app to match ping-pong players with each other. It's currently an API only, so you have to use curl to interact with it. Requires mysql
    https://github.com/cloudfoundry-samples/pong_matcher_go

    nodejs_buildpack: node-tutorial-for-frontend-devs: Node.js sample app with mongodb backend:
    https://github.com/cwbuecheler/node-tutorial-for-frontend-devs

    php_buildpack: PHPMyAdmin : out-of-the-box implementation of PHPMyAdmin 4.2.2. Requires mysql
    https://github.com/dmikusa-pivotal/cf-ex-phpmyadmin

    binary_buildpack: pezdispenser: Admin portal for Cloud Foundry
    https://github.com/pivotal-pez/pezdispenser

    ruby_buildpack: Rails app to match ping-pong players with each other. Requires mysql.

    python_buildpack: The buildpack uses pip to install dependencies and needs a requirements.txt.
    PyData app: https://gist.github.com/ihuston/d6aab5e4a811fe582fa7  Does not use pip; uses conda.

    staticfile_buildpack: Put a Staticfile in any directory and do a cf push. If directory browsing is needed, add a line to your Staticfile that reads directory: visible.

    Spring Cloud services Tile:
    - https://github.com/spring-cloud-samples/fortune-teller
    - https://github.com/dpinto-pivotal/cf-SpringBootTrader

    .NET sample app: Contoso University

    Heapdumps and CoreDumps on Cloud Foundry containers

    SSH into the warden container following instructions here and follow the instructions below ...

    Heapdumps:
    su vcap
    PID=` ps -ef | grep java | grep -v "bash\|grep" | awk '{print $2}'`
    jmap -dump:format=b,file=/home/vcap/app/test.hprof $PID

    Coredumps:
    I played with this some more and the process below worked for generating core dumps:

    ulimit -c unlimited
    gdb --pid=33
    Attaching to process 33
    Reading symbols from /home/vcap/app/.java-buildpack/open_jdk_jre/bin/java...(no debugging symbols found)...done.
    Reading symbols from /home/vcap/app/.java-buildpack/open_jdk_jre/bin/../lib/amd64/jli/libjli.so...(no debugging symbols found)...done.
    Loaded symbols for /home/vcap/app/.java-buildpack/open_jdk_jre/bin/../lib/amd64/jli/libjli.so
    ...
    Reading symbols from /home/vcap/app/.java-buildpack/open_jdk_jre/bin/java...(no debugging symbols found)...done.
    Reading symbols from /home/vcap/app/.java-buildpack/open_jdk_jre/bin/../lib/amd64/jli/libjli.so...(no debugging symbols found)...done.
    Loaded symbols for /home/vcap/app/.java-buildpack/open_jdk_jre/bin/../lib/amd64/jli/libjli.so
    ...
    (gdb) gcore


    (gdb) quit
    A debugging session is active.
    Inferior 1 [process 33] will be detached.
    Quit anyway? (y or n) y
    Detaching from program: /home/vcap/app/.java-buildpack/open_jdk_jre/bin/java, process 33

    root@18tlhc59f7e:~# ls -al
    total 1236312
    drwx------  2 root root       4096 Oct 12 14:19 .
    drwxr-xr-x 33 root root       4096 Oct 12 14:19 ..
    -rw-------  1 root root        985 Oct  6 04:47 .bash_history
    -rw-r--r--  1 root root       3106 Feb 20  2014 .bashrc
    -rw-r--r--  1 root root 1265958640 Oct 12 14:20 core.33
    -rwxr-xr-x  1 root root        213 Jul 24 21:08 firstboot.sh
    -rw-r--r--  1 root root        140 Feb 20  2014 .profile

Note: kill -SIGABRT 33 did not work.

In miscellaneous notes, everyone should read the following article on memory management in CF containers: http://fabiokung.com/2014/03/13/memory-inside-linux-containers/
Most of the Linux tools providing system resource metrics were created before cgroups even existed (e.g. free and top, both from procps). They usually read memory metrics from the proc filesystem: /proc/meminfo, /proc/vmstat, /proc/PID/smaps and others. Unfortunately /proc/meminfo, /proc/vmstat and friends are not containerized.
Most container-specific metrics are available in the cgroup filesystem via /path/to/cgroup/memory.stat, /path/to/cgroup/memory.usage_in_bytes, /path/to/cgroup/memory.limit_in_bytes and others.
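As a minimal sketch of reading the containerized numbers from inside the container (assuming a cgroup v1 memory hierarchy mounted under /sys/fs/cgroup/memory; the mount point may differ per deployment):

# Resolve this process's memory cgroup path and read its metrics
CG=$(awk -F: '$2 ~ /memory/ {print $3}' /proc/self/cgroup)
cat /sys/fs/cgroup/memory${CG}/memory.limit_in_bytes    # the container memory limit
cat /sys/fs/cgroup/memory${CG}/memory.usage_in_bytes    # current usage counted against the limit
head /sys/fs/cgroup/memory${CG}/memory.stat             # rss, cache and other per-cgroup counters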




Tuning the default memory settings of the Java Buildpack to avoid OOMs

    Credit for this blog post largely goes to Daniel Mikusa from Pivotal Support and others who have debugged and diagnosed a number of OOM issues recently with JDK8 and Cloud Foundry Java Buildpack.

    Situation

We have seen significant improvement in the performance of long runs for certain enterprise apps after decreasing the amount of memory allocated to thread stacks and increasing the native memory (i.e. metaspace) portion of the container. If you are seeing your app running out of memory (OOM) then make the following changes to the Java Buildpack memory settings:

1.) Lower the thread stack size. This defaults to right around 1M per thread, which is more than most threads ever need. I usually start by lowering this to 228k per thread, which is the JVM minimum. This will work fine unless you've got apps that do lots of recursion. At any rate, if you see any StackOverflowErrors, just bump up the value until they go away.

2.) The default allocation of memory weights is 75 heap and 10 native. I suggest starting with 60 heap and 25 native. That's probably a bit conservative, but it should leave more free room in the container, which should help to prevent crashes. Lowering the heap to 60 will lower the total amount of heap space that the app can use. If this causes problems with the app, like OOMEs, then you might need to raise the memory limit on the container instead. Once adjusted, we should see between 100 and 150M of free memory in the container.

In other words, for JDK 8:
MEMORY_LIMIT - (HEAP + METASPACE + 100M) should be between 100M and 150M.
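As a worked example with hypothetical numbers (not from the original post): a 1024M container whose memory calculator sizes the heap to 624M and the metaspace to 200M leaves 1024 - (624 + 200 + 100) = 100M of headroom, which sits right at the low end of the target range.

# Illustrative arithmetic only: 1024M limit, 624M heap, 200M metaspace
echo $(( 1024 - (624 + 200 + 100) ))    # prints 100 (MB of headroom)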

    The problem that most customers hit when moving their Java apps to CF is that they've never had a hard limit on total system memory before. Deploying to WAS/WebLogic/JBOSS (or any other application container) is going to be done on a system with swap space. That means if they don't have their memory settings quite right, the worst thing that will happen is that they use a bit of swap space. If it's just a little swap, they probably won't even notice it happening. With CF, there's much less forgiveness in the system. Exceeding the memory limit by even a byte will result in your app being killed.

    How to Fix this ?

Please set the JBP_CONFIG_OPEN_JDK_JRE (or, for the Oracle JRE, the JBP_CONFIG_ORACLE_JRE) env var and restage the app:


cf set-env my-app JBP_CONFIG_ORACLE_JRE_MEMORY_HEURISTICS '{heap: 60, native: 25, stack: "228k"}'

cf set-env my-app JBP_CONFIG_OPEN_JDK_JRE '[memory_calculator: {memory_sizes: {stack: 228k, heap: 225M}}]'

cf set-env my-app JBP_CONFIG_ORACLE_JRE '[memory_calculator: {memory_sizes: {stack: 228k}, memory_heuristics: {heap: 60, native: 25}}]'

For versions of the JBP before 3.3:
cf set-env my-app JBP_CONFIG_OPEN_JDK_JRE '[memory_calculator: {memory_heuristics: {stack: .01, heap: 10, metaspace: 2}}]'

    Script For Parsing NMT

    If you do happen to enable -XX:NativeMemoryTracking then use the script below to graph and generate charts of memory growth:


    and here are some quick docs for it.


There's one issue at the moment: the process id that the script looks for in the top output is hard-coded to 35. I think that it's usually 35 because of the order in which the processes start up, but that might not always be the case. You can manually find the pid in the log and change the script for now. I will try to update the script to detect that automatically, but haven't had the time yet.
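If you haven't turned NMT on yet, here is a hedged sketch of enabling it for a CF app (it assumes the Java buildpack passes the JAVA_OPTS environment variable through to the JVM; the app name is illustrative):

# Enable native memory tracking and print a summary when the JVM exits
cf set-env my-app JAVA_OPTS '-XX:NativeMemoryTracking=summary -XX:+UnlockDiagnosticVMOptions -XX:+PrintNMTStatistics'
cf restage my-app

# Inside the container (if the packaged JRE ships jcmd), after finding the java PID as in the heapdump section above:
jcmd $PID VM.native_memory summary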

    Are my apps worthy of the Cloud ?

    From time to time we get questions on what approaches to use to determine application suitability to the cloud. 

    In my view the best way to do this is to form a set of technical or business heuristics that are important to your enterprise and then grade the app portfolio on those criteria. Form a pool of 10 apps and begin app replatforming and modernization.

    The most important thing about this scoping exercise is that it should 1. be time scoped to a day and 2. involve all the stakeholders (business, developers, architects, testers).

    What you should NOT do is create a spreadsheet and rate apps on various criteria. Such an effort is destined to fail and get mired in analysis paralysis.

    App migration is like training for a marathon or weight training. You start with smaller weights and challenges and then ramp up to the longer distances and higher class weights.

    In the following screenshots I am going to display a set of heuristics that companies have used successfully to perform app migration to the cloud.

    1. Technical Feasibility 

    http://redmonk.com/jgovernor/2015/11/10/cloud-native-is-nice-and-all-but-how-do-get-there

    2. Business Feasibility

Differentiating: Clean Code, Now. Move to Microservices.
Parity: Good-enough software. Run on VMs.
Partner: With a little help from my friends. Outsource Refactoring.
Who Cares: Shut it down, shut it down. Phase out.
(From Niel Nickolaisen's Purpose Alignment Model.)


    3. Technical and Business Feasibility

Impact here refers to the disruption to the business. In a risk-averse organization, risk to the business becomes an important metric. The order of app re-platforming here is Q1, Q2, Q4, Q3.


    Evolution of the ESB to the Cloud In 5 Steps


    Background

The rising popularity of microservices as an architecture has led to several questions around the role of the ESB. Traditionally the role of the ESB was to fulfill the promise of SOA by providing invocation, routing, mediation, messaging, process choreography, service orchestration, complex event processing, management, agnosticism, support for various message exchange patterns, adapters, transformation, validation, governance, enrichment, support for WS-* standards and abstraction. One can easily see how such software becomes the central bottleneck for all things enterprise, aka the God object.
The term 'central' also describes the architectural style: everything had to be routed through the ESB. Not having to do that, while still supporting message exchange patterns via Spring Integration's approach, was a significant distinction and is now even more important. The microservices approach favors dumb pipes and smart endpoints, whereas an ESB works on a canonical data representation, leading to dumb endpoints and smart pipes. There is an inherent conflict between microservices and ESBs. An ESB-centric app architecture results in anemic services that are integrated via a smart centralized ESB.
The Microservice architecture pattern is SOA without the commercialization and perceived baggage of the WS-* death star and an ESB. Furthermore, the creation of a SOA Center of Excellence for Application Integration promoted a culture of centralized integration in the enterprise. Microservice-based applications favor simpler, lightweight protocols such as REST, rather than WS-*. They also very much avoid using ESBs and instead implement ESB-like functionality in the microservices themselves. The Microservice architecture pattern also rejects other parts of SOA, such as the concept of a canonical schema and a unified domain model across bounded contexts. [chris-richardson]

    Why Microservices

The fundamental tenets of microservices like modularization, cohesiveness, DRYness, loose coupling, replaceability etc. have been widely recognized in the software industry since the 70s. The recent popularity of microservices stems from the lack of independent deployability, independent scalability and independent failure in existing systems. A new generation of distributed system platforms and programming models has emerged to address these issues, i.e. microservices [martin-fowler]. Most enterprise systems over time evolve into Big Balls of Mud [balls-of-mud]. Beyond a certain scale, making modifications to this spaghetti code jungle becomes untenable. Microservices give us a structured approach to shear these monoliths and to develop and deploy them in an agile fashion. Decomposing a monolith requires patterns like the Strangler, Inverse-Conway, Facade, Adapter, smart routing, feature flags, zero-downtime deployment and Proxy. Cloud Foundry natively bakes these concerns into the platform.
Integration tools have historically been the domain of specialists working in the Integration Competency Center. They have required distinct skill sets and domain knowledge, from enterprise application integration, enterprise service buses and service-oriented architecture to extract-transform-load, enterprise data warehousing and business intelligence. The citizen model of integration refers to the democratization of integration: workflows put together by citizen developers with no specialized domain knowledge, driven by the demand for self-service from the business, upending traditional approaches to data and application integration. [rise-citizen-integrator]
The emergence of PaaS and iPaaS platforms is a consequence of the need for reduced business software time to value. Features need to be delivered in a time span of weeks instead of years. The Third Platform makes this possible by baking into its foundation microservice management capabilities like service discovery, routing, load balancing and application lifecycle management, and operational capabilities like canary releases, zero downtime deployment, automation, monitoring and security. Many of the concerns taken care of by the ESB are now usurped by the PaaS and provided to all applications transparently at web scale. Moreover these platforms are truly open to extension and can be deployed using next generation tooling like BOSH, Ansible etc.


    People & Process

A successful deployment of microservices is contingent on people, process and technology. The development of microservices is done with the agile methodology. The roots of agile can be found in Mel Conway's seminal work "How Do Committees Invent?" [mel-conway] in 1967.
    Conway's First Law: A system’s design is a copy of the organization’s communication structure. Conway's first law tells us TEAM SIZE is important. Communication dictates design. Make the teams as small as necessary.
    Conway's Second Law: There is never enough time to do something right, but there is always enough time to do it over. Conway's second law tells us PROBLEM SIZE is important. Make the solution as small as necessary.
Conway's Third Law: There is a homomorphism from the linear graph of a system to the linear graph of its design organization. Conway's Third law tells us CROSS-TEAM INDEPENDENCE is important. A more modern variant of the same concept is the 'two pizza rule' coined by Jeff Bezos, i.e. teams shouldn't be larger than what two pizzas can feed.
    Conway's Fourth Law: The structures of large systems tend to disintegrate during development, qualitatively more so than with small systems. Conway’s fourth law tells us that TIME is against LARGE teams therefore it is critical to make release cycles short and small.
    The ESB development model of design by committee is at odds with the fundamental tenets of agile and lean development i.e. smaller decentralized teams and faster release cycles. ESB development requires specialized knowledge and does not support rapid continuous integration and development.
The O-Ring Theory, when applied to DevOps, states that when production depends on completing a series of tasks, failure or quality reduction in any task reduces the value of the entire product [O-ring-theory]. Another interesting finding is that the better you are, the more value you get from improving your weaknesses; conversely, if you are fairly poor across the board you won't get as high an ROI from an improvement in one specific area. In order to boost productivity it is critical to pursue excellence across all IT processes.

    Challenges With Microservices

When a business system is designed as a suite of microservices that prefer choreography over orchestration, it behooves the advocates of the architecture to provide best practices and patterns to implement the abstraction. This is where microservices fall short. The programming frameworks for microservices choreography, such as event sourcing and CQRS, are woefully behind in terms of maturity and production readiness, with some notable exceptions like Axon or akka-persistence.
An ESB provides a visual mode of development that allows domain modelers and business developers to design and validate a business process without getting their hands dirty with code. A cross-bar architecture allows architects to enforce a central point of governance and control over design and implementation. DIY integration frameworks lack the extensive user and process modeling tools that traditional ESB vendors provide. This is a shortcoming in the DIY integration ecosystem around Spring Integration.
Features of the ESB like business activity monitoring and event correlation now have to be implemented in applications or supplied as cross-cutting libraries to applications. An ESB has a whole suite of connector services and adapters that will need to be plugged in per individual service instead of being harvested from a common core.
A mature enterprise has a fully developed and very mature suite of ESB services that makes development of features with ESBs easy and fast. The pain of development, training and deployment has been amortized over the years to the point where developing features with the ESB may be equivalent to writing a suite of microservices.


    Migration from ESBs to Cloud Native Platforms

    There are 5 phases to evolving an ESB infrastructure and accompanying business services to the cloud.

    Phase 0 : Co-exist

Deploy the ESB alongside the PaaS. ESB services are exposed as user-provided external services to the PaaS. This allows the enterprise to deploy apps to a platform like Cloud Foundry and keep its existing backend services in an ESB like TIBCO. The advantage here is the ability to move the web tier to the next generation platform without disrupting existing integration flows.
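As a sketch of how an existing ESB endpoint can be surfaced to platform apps (the service name and URI are illustrative, not from the original post):

# Register the ESB endpoint as a user-provided service instance
cf create-user-provided-service esb-orders -p '{"uri":"https://esb.internal.example.com/orders/v1"}'
# Bind it to an app; the URI shows up in VCAP_SERVICES at runtime
cf bind-service my-web-app esb-orders
cf restage my-web-app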

[Figure: Phase-0.png]

    Phase 1 : Lift And Shift

Deploy the ESB in the PaaS. Increasingly, ESB vendors have moved their offerings to the cloud. See TIBCO Business Works Cloud Foundry Edition or the MuleSoft iPaaS. In fact Gartner defines this as an entirely new category called iPaaS, i.e. a suite of cloud services enabling development, execution and governance of integration flows connecting any combination of on-premises and cloud-based processes, services, applications and data within individual or across multiple organizations.
The Cloud Foundry edition of TIBCO allows individual TIBCO business processes to be natively scaled and managed by Cloud Foundry. From a Cloud Foundry perspective the TIBCO business process is treated as a new language with its own buildpack. This is a reasonable middle ground between ESBs and microservices. There are two issues with a cloud-enabled ESB like TIBCO: it needs to leverage native messaging capabilities (like FTL) that are not available natively in the PaaS, and it needs to leverage the managed data backing services natively available on the platform.
[Figure: Phase-1.1.png]

    Phase 2 : Refactor

Refactor the services offered by the ESB such that the ESB layer is kept as thin as possible. At this point there are many opportunities to migrate from the ESB to Spring Integration to achieve basic transformation, mediation and routing. Consumer services expose APIs with the formats/protocols desired by consumers; they transform formats and provide sync/async protocols and composite workflows. Provider services integrate with legacy IT systems, implement the required workflow and consume whatever protocol/format is exposed by legacy systems. Refactor services incrementally per bounded context, prioritizing the ones that are new or under active development.
Standardize consumer APIs over REST/JSON and reduce the cross-bar architecture complexity by not entertaining a variety of consumer preferences. Extend bounded contexts to include legacy app dependencies. For high performance, data intensive workloads the cost of serialization/deserialization of messages from the enterprise bus may be too high. Consider a binary serialization mechanism like Google Protocol Buffers or Kryo for internal messaging, APIs, caching, etc. across services.
Leverage Spring Flo, a graphical UI for the design and visualization of streaming and batch data pipelines. Flo allows you to create real-time streaming and batch pipelines either with the textual shell or with the drag-and-drop interface (see Composed Batch Job Orchestration using Flo for Spring XD).
        

    Phase 3 : Replace

Replacement of ESB services using integration pattern frameworks like Spring Cloud Stream. Spring Integration and its cloud successor Spring Cloud Stream provide successively higher layers of abstraction, making it possible to write ESB-style flows and functions using cloud native architectural constructs, leveraging annotations and interfaces to specify business function. Spring Cloud Stream is a combination of Spring Cloud, Spring Boot and Spring Integration where integration modules are Spring Boot apps. The plumbing of putting together channel adapters and flows over message brokers like Kafka and RabbitMQ is done intelligently by the platform. Spring Cloud Data Flow can also orchestrate jobs, so batch processing and stream processing are all manageable in a consistent way on a unified platform. Spring Cloud "tasks" can handle Spring Batch jobs as well as simple one-off runnable tasks. This approach does not rely on IDE tools to design the services plus an additional step to deploy. Development of integration services is the same as development of any other business logic, which eliminates the traditional impedance mismatch associated with ESBs and allows for faster iteration on feature code. A citizen integration framework like Spring Integration gives developers a familiar, Java-native way to compose and operate on services.
One language (Java), one runtime (Tomcat) and one CI/CD process results in the holy grail of reduced time to value and increased efficiency. Both Apache Camel and Spring Integration have a vast number of components, modules and adapters to legacy systems that can aid in this transformation. REST APIs from z/OS Connect should be leveraged natively on the mainframe where possible, ditching the man-in-the-middle legacy adapter layer. z/OS Connect enables z/OS systems such as CICS and IMS to provide RESTful APIs and accepts transactional JSON payloads via the WebSphere Liberty app server.
Apache Camel is in the same category as Spring Integration in terms of being a lightweight framework alternative to ESBs. Apache Camel, like Spring Integration, is a low-level toolbox of components that can be used to implement EAI patterns. Unlike Spring Cloud Stream, Apache Camel does not provide higher-level microservice abstractions for designing flows, other than a DSL.
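As a deployment-level sketch of this phase (the module, jar and service names are illustrative, and it assumes the integration modules are packaged as Spring Boot jars with a RabbitMQ binder and that a RabbitMQ service is available in the marketplace):

# Each integration module is just a Spring Boot app bound to the message broker
cf create-service p-rabbitmq standard integration-bus
cf push order-enricher -p order-enricher.jar --no-start
cf bind-service order-enricher integration-bus
cf start order-enricher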

    Phase 4 : Transform

After phases 2 and 3 you are leveraging a lot of integration patterns, aka EAI constructs. Any ESB, even a lightweight cloud native one, inevitably results in context bleeding and a proliferation of interchange contexts resulting from sharing a canonical data model. Entities and Aggregates are outside their Bounded Context, or even worse replicated outside, expressed in a more generic representation, where the integration team has put an extra layer of complexity on top of it. [enterprise-arch-in-practice]
Phase 4 is doing away with EAI patterns and leveraging a pure event-driven messaging architecture. Decompose ESB integration services, based on the purpose alignment model, into higher-order, vertically sliced, domain-based services. The intent is to make each service independent. The service provides a smart endpoint. The service fulfills a contract for a bounded context and is complete in all respects. Process orchestration is replaced by choreography. The Spring Cloud framework provides an umbrella of Netflix OSS tools that should be used when writing distributed systems. The services, implemented with the help of Spring Cloud and the platform, are responsible for providing all the qualities of service provided earlier by the ESB. A fabric of independently operating, coarsely grained microservices that are choreographed to achieve business function is the ultimate manifestation of the Domain Driven microservice philosophy.
In a choreography approach the central controller service is replaced by the asynchronous flow of events over a message bus. These events are picked up by interested services and appropriate actions are taken, which may be message enrichment, transformation, etc. Since business process flows are now transformed into event and activity flows, business activity monitoring becomes critical. Compensation of failed transactions and exception actions should be incorporated into business logic. A user journey now becomes a series of async interactions. Additional work is needed to ensure that you can monitor and track that the right things have happened.
The decomposition of a monolith into microservices is done by breaking the system into a shared kernel and identifying candidate bounded contexts using techniques like event storming and model storming. Use an Anti-Corruption Layer to ring-fence clearly distinguishable bounded contexts from their neighbors to allow for extraction. Start with the core domain and move according to value. Transform the monolith using approaches such as Strangler, asset-capture, event-interception and reference-data. A transformation is not a refactoring; it's a debt restructuring program. Usual moves include Shared Kernel -> Customer-Supplier -> Open Host Services -> Published Language. References: (ian-cooper, implementing-ddd)


    TL;DR Recommendations



1. Where a comprehensive suite of ESB services already exists, follow a phased approach to migrating ESB composite and provider services. Business logic should reside in Java apps, and only fundamental ESB functions like legacy adapters and pure transformation and mediation should be handled by the ESB. Phase 1 is short term, Phases 2 & 3 are medium term and Phase 4 should be the target for longer term transformation.
2. Where existing ESB services do not already exist, start greenfield net-new development with a pure microservices based approach, aka jump to Phase 4. This will accelerate development to the target state and generate a collateral of slimmed-down, domain-based, replaceable and reusable cloud native services. Support for horizontal concerns will be provided by Cloud Foundry. Integration concerns will be handled by implementing the app with Spring technologies (Spring Cloud Stream, Spring Cloud Data Flow, Spring Batch, Spring Reactor, …) and forward looking design patterns like CQRS, event sourcing and accumulate-only data stores.
3. Create sample business process and app code repositories as well as an internal knowledge base around technologies like Spring Integration, Spring Data, Spring Cloud and Spring Batch. As developers start writing apps on cloud native platforms they need to be equipped to understand the tradeoffs in writing distributed systems at scale, otherwise the result will be distributed balls of mud.


Migrating apps to the Cloud Foundry Diego Runtime

Cloud Foundry has been rebased on a new runtime, Diego.

You can follow the steps below to migrate your app with the least disruption from the existing DEA VMs to the Diego cells:

 (1) Deploy DEAs & Diego side by side when deploying the 1.6.x ERT.
 (2) Set "Use Diego by default instead of DEAs" = true
 (3) Run cf curl /v2/apps/APP_GUID -d '{"diego": true}' -X PUT for pre-existing DEA-deployed apps
 (4) Test/trigger app restages onto Diego as time permits

If any of your apps use the VCAP_APP_PORT variable, please refactor them to simply use the PORT env var.
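For example, a hedged sketch of the refactor (the app name and start command are illustrative, assuming a Node.js app whose server.js accepts a --port flag):

# Before (DEA): the app read VCAP_APP_PORT at startup
# After (Diego): bind to the platform-assigned $PORT instead
cf push my-app -c 'node server.js --port $PORT'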

    Please read this Pivotal Technote that provides details on the migration steps

    Cloud Nativeness of an Application

Questions to determine the readiness of your application to move to the cloud:

    Q0 - Pick an appropriate path to the cloud:
        1. Java Buildpack
        2. TomEE Buildpack
        3. JBOSS Buildpack
        4. Liberty Buildpack
        5. WebLogic Buildpack
        6. Fat jar (Bundle app server with app)
        7. Docker





Q1 - Does the application rely on a persistent file system? Do you expect the file system to persist across restarts?
     - Disk Caches
     - Transaction Logs
     - Hardcoded File Paths
     - Any java.io.* use needs to be looked at

    Q2 - Does the app rely on non HTTP inbound protocol routing.
        - TCP support coming in the near future

    Q3 - Does the app take a long time to start or stop
        - any dependencies or semantic ordering between apps

    Q4 - Does the app use any native code libs  
        - Those libs need to be recompiled for the PCF Stemcell

    Q5 - Does the app require distributed XA 2-phase commit transaction ?

    Q6 - Is the app packaged as an ear/war/jar file ?
         - JBOSS buildpack and java buildpack do not support ear files

    Q7 - Configuration - How is the app configured.
         - Hardcoded property files
         - Generate one artifact configured based on the environment
         - Where is the config coming from

    Q8 - All logging to stdout/stderr
        - No log persistence
        - Use Console appenders

    Q9 - What do you use for monitoring today
  - Any agents installed on the OS won't work
  - Agents have to be bundled with the app, the app server or the buildpack

Q10 - Size of the app should ideally be less than 250MB
    - Larger than that and you will have issues with scaling and cf push

    Q11 - Favor external caching instead of in-jvm Caches
          - all caching should be done in a distributed cache outside the app JVM
          - in-JVM caches are transient
          - all state to be managed outside the app via backing services
            - punch firewall holes for all backing services

    Q12 - Libraries and Frameworks
        - Use cloud native / cloud friendly frameworks like Spring Cloud etc.?

    Q13 - Does your app need specialized hardware in terms of specific CPU/Memory/Disk resources
        - Dependency on specific JVMS/JDKs ?

    Q14 - No app server specific HA/Clustering or Administration will work in CF

    Q15 - OS based registry security will NOT work on the cloud
    - Rely on 3rd party trust protocols like OAuth or SAML for authentication and authorization?

    Mendix loves Cloud Foundry

In June 2014 the Mendix app platform (aPaaS) married Cloud Foundry (PaaS). In case you are wondering whether this announcement was just vaporware, please rest your concerns: Mendix apps work on Cloud Foundry. Kaushik Bhattacharya has done all the legwork to flesh this out. Read below ...

    Mendix buildpack documentation describes the steps to push the application. The short version is:

    1> cf push mendix_app -b https://github.com/mendix/cf-mendix-buildpack -p <mendix_app.mda> --no-start

    2> Create and bind PCF MySQL service to the app in the usual way.

    3> Create and bind PCF RiakCS service to the app in the usual way.

    4> The app expects values for two variables ADMIN_PASSWORD and DATABASE_URL which can be set as follows:
    cf set-env my-app ADMIN_PASSWORD password
cf set-env my-app DATABASE_URL 'mysql://username:password@hostname:port/database'
    Note: You can copy-paste the database URL from the VCAP_SERVICES variable but all parameters such as “?reconnect=true” have to be removed.

5> Start the app (or restage it if it is already running):
cf start my-app   (or: cf restage my-app)

    That's it! 

    Cloud Foundry in a Box


You may be wondering how to hitch a ride on the PaaS train. Would you like to spin up Cloud Foundry on your local machine in less than 10 minutes? Let me present Micro Cloud Foundry, aka cloud in a box, brought to you by the Cloud Foundry Foundation:

MicroPCF is the simplest way to get a complete Cloud Foundry on a single machine.
You gotta verify this. So head on over to https://github.com/pivotal-cf/micropcf and spin up Cloud Foundry in a VM using Vagrant. The future of MicroPCF is bright. From the release notes:

    In addition to the standard Cloud Foundry components, the MicroPCF distro will include the default PCF service brokers (Redis, MySQL, and RabbitMQ), as well as other PCF-specific applications. The first service available is the Redis service broker. This allows Redis instances to be created and bound to apps running on MicroPCF. More information can be found on Redis for Pivotal Cloud Foundry.
    WOA! Services and a full featured platform all in one VM.
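If you want to try it, here is a minimal sketch based on the repo's Vagrant workflow (the provider flag, API endpoint and login step are illustrative; check the README for the current instructions):

git clone https://github.com/pivotal-cf/micropcf.git
cd micropcf
vagrant up --provider=virtualbox
# Point the cf CLI at the local instance once the VM is up (endpoint per the README)
cf api api.local.micropcf.io --skip-ssl-validation
cf login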