Serverless API News

These are the news items I've curated in my monitoring of the API space that are related to serverless and APIs, and that I thought were worth including in my research. I'm using all of these links to better understand how APIs are being deployed across a diverse range of implementations.

A Simple API With AWS DynamoDB, Lambda, and API Gateway

I’ve set up a few Lambda scripts from time to time, but haven’t had any dedicated project time to push forward API serverless concepts. Over the weekend I had a chance to deploy a couple of APIs using AWS DynamoDB, Lambda, and API Gateway, lighting up some of the serverless API possibilities in my brain. Like most areas of the tech sector, I think the term is dumb, and there is too much hype, but I think underneath there are some interesting possibilities, at least enough to keep me playing around with things.

Right now my primary API setup is an Amazon Aurora (MySQL) backend, with APIs deployed on EC2 using the Slim API framework in PHP. It is clean, simple, and gets the job done. I use 3Scale or GitHub for the API management layer. This new approach simplifies some things for me, but definitely goes further down the AWS rabbit hole with the adoption of API Gateway and Lambda. It also introduces enough interesting benefits that I am considering it for use on some specific projects.

Identity and Access Management (IAM) Role

The first thing you need to do to make the whole AWS thing work is set up a role using AWS IAM. I created a role just for this project, and added CloudWatchFullAccess, AmazonDynamoDBFullAccess, and AWSLambdaDynamoDBExecutionRole. I need this role to handle a bunch of management level things with the database, and logging. IAM is one of the missing aspects of hand crafting my APIs, and is why I am considering adopting it on behalf of my customers, to help them get a handle on security.

Simple API Database Backends Using AWS DynamoDB

I am a big fan of relational databases, mostly out of habit and experience. A client of mine is fluent in AWS DynamoDB, which is a simple NoSQL solution, so I felt compelled to ensure the backend database for their APIs spoke DynamoDB. It’s a pretty simple database, so I got to work creating an account table, added a simple JSON object that contained 4 or 5 fields, and fired up an index for the simple accounts database. The databases I’m creating are meant to track aspects of API management, so the tables won’t end up being too large, or have high performance requirements. Regardless, DynamoDB is a perfect backend for APIs, leaving me unsure why I don’t use the platform more often.
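To give a feel for how little work this is, here is a rough sketch of standing up that accounts table using the Node.js AWS SDK. The table name and the id key are hypothetical choices on my part, not a definitive schema:

```javascript
// A minimal sketch of creating the accounts table -- assuming the
// Node.js AWS SDK, and a table keyed on a single id field.
var AWS = require('aws-sdk');
var dynamodb = new AWS.DynamoDB({region: 'us-east-1'});

var params = {
  TableName: 'accounts', // hypothetical table name
  AttributeDefinitions: [{AttributeName: 'id', AttributeType: 'S'}],
  KeySchema: [{AttributeName: 'id', KeyType: 'HASH'}],
  ProvisionedThroughput: {ReadCapacityUnits: 5, WriteCapacityUnits: 5}
};

dynamodb.createTable(params, function(err, data) {
  if (err) console.log(err);
  else console.log('accounts table is being created');
});
```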

Using Lambda Functions Behind The API

Instead of firing up an Amazon EC2 instance and hand crafting my API framework, I crafted a handful of serverless scripts in Node.js that will run as independent Lambda functions. I’m going to eventually need a whole bunch of functions, but to get me going with this new API I crafted four separate Lambda functions that I can use to drive the API:

  • searchAccounts - Using the DynamoDB API scan method to query the table.
  • addAccount - Using the DynamoDB API putItem method to add a record to the table.
  • updateAccount - Using the DynamoDB API updateItem method to update a record in the table.
  • deleteAccount - Using the DynamoDB API deleteItem method to delete a record from the table.

Using the AWS SDK, I’m simply making calls to the DynamoDB API to get all the work done. I’m fluent in JavaScript, but not well versed in using Node.js, but it doesn’t take much energy to understand what is going on. The serverless functions are pretty utilitarian, and all that is unique is the DynamoDB method to call, and the JSON that is being sent with each call. It is pretty straightforward, and easily replicated for other APIs. I will keep developing functions for my API, but now I can at least handle the basic CRUD functionality around my new database.
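To show just how utilitarian these functions are, here is a rough sketch of what the addAccount function might look like. The field names are hypothetical placeholders, and the other three functions just swap in a different DynamoDB method:

```javascript
// A sketch of the addAccount Lambda function, using the DynamoDB
// putItem method -- field names here are hypothetical placeholders.
var AWS = require('aws-sdk');
var dynamodb = new AWS.DynamoDB();

exports.handler = function(event, context, callback) {
  var params = {
    TableName: 'accounts',
    Item: {
      id: {S: event.id},
      name: {S: event.name},
      email: {S: event.email}
    }
  };
  dynamodb.putItem(params, function(err, data) {
    if (err) return callback(err);
    callback(null, {message: 'account added'});
  });
};
```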

Publish An API Using AWS API Gateway

The last piece of the puzzle for this story is the API. Each Lambda function accepts and returns JSON, which is technically an API, but there is no management layer, or RESTful infrastructure present. The AWS API Gateway gives me the ability to craft API paths, with accompanying GET, POST, PUT, DELETE, and other methods. For each method I add, I’m given four options for connecting to my backend: making an HTTP call, creating a mock API, leveraging another AWS service, or connecting to a Lambda function. I quickly wire up a GET, POST, PUT, and DELETE to each of my functions, and add my API to an AWS API Gateway plan, requiring API keys, and limiting who can access what.
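Once the plan is in place, consuming the API looks like any other key-protected API. Here is a hypothetical call to the accounts API, with the hostname, stage, and key being placeholders for whatever the gateway hands you:

```javascript
// Calling the deployed accounts API -- the x-api-key header is how
// API Gateway plans enforce keys. Hostname, stage, and key are
// placeholders.
var https = require('https');

var options = {
  hostname: 'abc123.execute-api.us-east-1.amazonaws.com',
  path: '/prod/accounts',
  method: 'GET',
  headers: {'x-api-key': 'YOUR-API-KEY'}
};

https.request(options, function(res) {
  var body = '';
  res.on('data', function(chunk) { body += chunk; });
  res.on('end', function() { console.log(body); });
}).end();
```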

I now have an accounts API which allows me to add, update, delete, and search for accounts. My data is stored in DynamoDB, and served up via Lambda functions, through the API Gateway. It is secured. It is scalable. I can easily quantify what my database, function, and gateway resource usage and costs will end up being. I get why folks are interested in serverless. It’s clean. It’s modular. It scales. It is very manageable. I don’t feel like it will be the answer for every API I need to deploy, but it does make sense for quickly deploying APIs for customers who are open to AWS, and need things to be secure, highly performant, and scalable.

A serverless approach definitely takes the sysadmin load off a little bit, especially when you depend on DynamoDB for the backend. DynamoDB, Lambda, and API Gateway offer a pretty nice stack that can auto tune and scale itself. I’m going to fire up five separate APIs using this new approach, and set up some monitoring and testing to see how it delivers, and maybe get a handle on the costs associated with operating an API like this. I still need to attach a custom domain, and get a handle on logging with AWS CloudWatch, and some of the other aspects of API management using AWS API Gateway. However, it provides me with a nice look into the serverless world, and how I can use it to deploy and manage APIs, but also use APIs to manage a serverless approach by publishing functions using the Lambda API, keeping things in tune with my API definitions stored on GitHub.


Azure Matching AWS When It Comes To Serverless Storytelling

I consume a huge number of blog and Twitter feeds each week. I evaluate the stories published by major tech blogs, cloud providers, and individual API providers. In my work there is a significant amount of duplication in stories, mostly because of press release regurgitation, but one area I watch closely is the volume of stories coming out of major cloud computing providers around specific topics that are relevant to APIs. One of the topics I’m watching closely is the new area of serverless, and what type of stories each provider is putting out there.

Amazon has long held the front runner position, because AWS was the first major cloud provider to do serverless with Lambda, coining the term, and dominating the conversation with their brand of API evangelism. However, in the last couple of months I have to say that Microsoft is matching AWS when it comes to the storytelling coming out of Azure in the area of serverless and function as a service (FaaS). Amazon definitely has an organic lead in the conversation, but when it comes to the sheer volume, and regular drumbeat of serverless stories, Microsoft is keeping pace. After watching several months of sustained storytelling, it looks like they could even pass Amazon in the near future.

When you are down in the weeds you tend to not see how narratives spread across the space, and the power of this type of storytelling, but from my vantage point, it is how all the stories we tell at the ground level get seeded, and become reality. It isn’t something you can do overnight, and very few organizations have the resources, and staying power, to make this type of storytelling a sustainable thing. I know that many startups and enterprise groups simply see this as content creation and syndication, but that is the quickest way to make your operations unsustainable. Nobody enjoys operating a content farm, and if nobody cares about the content when it is being made, then nobody will care about the content when it is syndicated and consumed–this is why I tell stories, and you should too.

Stories are how all of this works. It is stories that developers tell within their circles that influence what tools they will adopt. It is stories at the VC level that determine which industries, trends, and startups they’ll invest in. Think about the now infamous Jeff Bezos mandate, which has been elevated to mythical status, and contributed to much of the cloud adoption we have seen to date. It is this kind of storytelling that will determine each winner of the current and future battles between cloud giants, whether it is serverless, devops, microservices, machine learning, artificial intelligence, internet of things, or any other sci-fi, API-driven topic we can come up with in the coming years. I have to admit, it is interesting to see Microsoft do so well in the area of storytelling after many years of sucking at it.


Reducing Developers To A Transaction With APIs, Microservices, Serverless, Devops, and the Blockchain

A topic that keeps coming up in discussions with my partner in crime Audrey Watters (@audreywatters) about our podcast is the future of labor in an API world. I have not written anything about this, which means I’m still in the early stages of any research into this area, but it has come up in conversation, and is reflected regularly in my monitoring of the API space, so I need to begin working through my ideas in this area. It is a process that helps me better see what is coming down the API pipes, and fill in the gaps in what I do not know.

Audrey has long joked about my API world using a simple phrase: “reducing everything to a transaction”. She says it mostly in jest, but other times I feel like she wields it as the Cassandra she channels. I actually bring up the phrase more than she does, because it is something I regularly find myself working in the service of as the API Evangelist. By taking a pro API stance I am actively working to break legacy business, institutional, and government processes down into a variety of individual tasks, or if you see things through a commercial lens, transactions.

Microservices

A microservices philosophy is all about breaking down monoliths into small bite size chunks, so they can be transacted independently, scaled, evolved, and deprecated in isolation. A microservice should do one thing, and do it well (no backtalk). A microservice should do what it does as efficiently as possible, with as few dependencies as possible. Microservices are self-contained, self-sufficient, and have everything they need to get the job done under a single definition of a service (a real John Wayne of compute). And of course, everything has an API. Microservices aren’t just about decoupling the technology, they are about decoupling the business, and the politics of doing business within SMBs, SMEs, enterprises, institutions, and government agencies–the philosophy for reducing everything to a transaction.

Containers

Containers are a microservice way of thinking about software that is born in the clouds, a by-product of the virtualization and API-ization of IT resources like storage and compute. In the last decade, as IT services moved from the basements of companies into the cloud, a new approach to delivering the compute, storage, and scalability needed to drive this new microservices way of doing business emerged, called containers. In 2017 businesses are being containerized. The enterprise monolith is being reduced down to small transactions, putting the technology, business, and politics of each business transaction into a single container, for more efficient development, deployment, scaling, and management. Containers are the vehicle moving the microservices philosophy forward–the virtualized embodiment of reducing everything to a transaction.

Serverless

Alongside a microservice way of life, driven by containerization, is another technological trend (undertow) called serverless. With the entire IT backend being virtualized in the cloud, the notion of the server is disappearing, lightening the load for developers in their quest for containerizing everything, turning the business landscape into microservices that can be distilled down to a single, simple, executable, scalable function. Serverless is the codified conveyor belt of transactions rolling by each worker on the factory floor. Each slot on a containerized, serverless, microservices factory floor possesses a single script or function, allowing each transaction to be executed, and replicated, so it can be applied over and over, scaled, and fixed as needed. Serverless is the big metal stamping station along a multidimensional digital factory assembly line.

DevOps

Living in microservices land, with everything neatly in containers, being assembled, developed, and wrenched on by developers, you are increasingly given more (or less) control over the conveyor belt that rolls by you on the factory floor. As a transaction developer you are given the ability to change the direction of your conveyor belt, speed things up, apply one or many metal stamp templates, and orchestrate as much, or as little, of the transaction supply chain as you can keep up with (meritocracy 5.3.4). Some transaction developers will be closer to the title of architect, understanding larger portions of the transaction supply chain, while most will be specialized, applying one or a handful of transaction templates, with no training or awareness of the bigger picture, simply pulling the DevOps knobs and levers within their reach.

Blockchain

Another trend (undertow) that has been building for some time, and that I have managed to ignore as much as I can (until recently), is the blockchain. Blockchain and the emergence of API driven smart contracts have brought the technology front and center for me, making it something I can no longer ignore, as I see signs that each API transaction will soon be put in the blockchain. The blockchain appears to be becoming the decentralized (ha!) and encrypted manifestation of what many of us have been calling the API contract for years. I am seeing movements from all the major cloud providers, and lesser known API providers, to ensure that all transactions are put into the blockchain, providing a record of everything that flows through the API pipes, and has been decoupled, containerized, rendered serverless, and made available for DevOps orchestration.

Ignorance of Labor

I am not an expert in labor, unions, and markets. Hell, I still haven’t even finished my Marx and Engels Reader. But, I know enough to be able to see that us developers are fucking ourselves right now. Our quest to reduce everything to a transaction, decouple all the things, and containerize and render them serverless makes us the perfect tool(s) for some pretty dark working conditions. Sure, some of us will have the bigger picture, and make a decent living being architects. The rest of us will become digital assembly line workers, stamping and maintaining a handful of services that do one thing and do it well. We will be completely unaware of dependencies, or how things are orchestrated, barely able to stay afloat and pay the bills, leaving us thankful for any transactions sent our way.

Think of this frontline in terms of Amazon Mechanical Turk + APIs + Microservices + Containers + Serverless + Blockchain. There is a reason young developers make for good soldiers on this front line. Lack of awareness of history. Lack of awareness of labor. It makes for great digital factory floor workers, stamping transactions for reuse elsewhere in the digital assembly line process. This model will fit well with current Silicon Valley culture. There will still be enough opportunity in this environment for architects and cybersecurity theater conductors to make money, exploit, and generate wealth. Without the defense of unions, government, or institutions, us developers will find ourselves reduced to transactions, stamping out other transactions on the digital assembly line floor.

I know you think you’re savvy. I used to think this too. Then after having the rug pulled out from under me, and the game changed around me by business partners, investors, and other actors who were playing a game I’m not familiar with, I have become more critical. You can look around the landscape right now and see numerous ways in which power has set its sights on the web, completely distorting any notion of the web being a democratic, open, inclusive, or safe environment. Why do us developers think it will be any different with us? Oh yeah, privilege.


Being First With Any Technology Trend Is Hard

I first wrote about Iron.io back in 2012. They are an API-first company, and they were the first serverless platform. I’ve known the team since they first reached out back in 2011, and I consider them one of my poster children for why there is more to all of this than just the technology. Iron.io gets the technology side of API deployment, and they saw the need for enabling developers to go serverless, running small scalable scripts in the cloud, and offloading the backend worries to someone who knows what they are doing.

Iron.io is what I’d consider to be a pretty balanced startup, slowly growing, and taking the sensible amounts of funding they needed to grow their business. The primary area where I would say Iron.io has fallen short is when it comes to storytelling about what they are up to, and generally playing the role of a shiny startup everyone should pay attention to. They are great storytellers, but unfortunately the frequency and amplification of their stories has fallen short, allowing other strong players to fill the void–opening the door for Amazon to take the lion’s share of the conversation when it comes to serverless. It demonstrates that you can rock the technology side of things, but if you don’t also rock the storytelling and more theatrical side of things, there is a good chance you can come in second.

Storytelling is key to all of this. I always love the folks who push back on me saying that nobody cares about these stories, that the markets only care about successful strong companies–when in reality, IT IS ALL ABOUT STORYTELLING! Amazon’s platform machine is good at storytelling. Not just their serverless group, but the entire platform. They blog, tweet, publish press releases, whisper in reporter ears, buy entire newspapers, publish science fiction patents, conduct road shows, and put on flagship conferences. Each AWS platform team can tap into this, participate, and benefit from the momentum, helping them dominate the conversation around their particular technical niche.

Being first with any technology trend will always be hard, but it will be even harder if you do not consistently tell stories about what you are doing, and what those who are using your platform are doing with it. Iron.io has been rocking it for five years now, and is continuing to define what serverless is all about. They just need to turn up the volume a little bit, and keep doing what they are doing. I’ll own a portion of this story, as I probably didn’t do my share to tell more stories about what they are up to, which would have helped amplify their work over the years–something I’m working to correct with a little storytelling here on API Evangelist.


Bringing The API Deployment Landscape Into Focus

I am finally getting the time to invest more into the rest of my API industry guides, which involves deep dives into core areas of my research like API definitions, design, and now deployment. The outline for my API deployment research has begun to come into focus and looks like it will rival my API management research in size.

With this release, I am looking to help onboard some of my less technical readers with API deployment. Not the technical details, but the big picture, so I wanted to start with some simple questions to help prime the discussion:

  • Where? - Where are APIs being deployed? On-premise, and in the clouds. Traditional website hosting, and even containerized and serverless API deployment.
  • How? - What technologies are being used to deploy APIs? From using spreadsheets, document and file stores, or the central database, to thinking smaller with microservices, containers, and serverless.
  • Who? - Who will be doing the deployment? Of course, IT and developer groups will be leading the charge, but increasingly business users are leveraging new solutions to play a significant role in how APIs are deployed.

The Role Of API Definitions

While not every deployment will be auto-generated using an API definition like OpenAPI, API definitions are increasingly playing a lead role as the contract that doesn’t just deploy an API, but sets the stage for API documentation, testing, monitoring, and a number of other stops along the API lifecycle. I want to make sure to point out in my API deployment research that API definitions aren’t just overlapping with deploying APIs, they are essential to connect API deployments with the rest of the API lifecycle.

Using Open Source Frameworks

Early on in this research guide I am focusing on the most common way for developers to deploy an API: using an open source API framework. This is how I deploy my APIs, and there are an increasing number of open source API frameworks available out there, in a variety of programming languages. In this round I am taking the time to highlight at least six separate frameworks in the top programming languages where I am seeing sustained deployment of APIs using a framework. I don’t take a stance on any single API framework, but I do keep an eye on which ones are still active, and enjoying usage by developers.
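Just to ground what I mean by the framework approach, here is a minimal sketch of a hand-rolled API using Express in Node.js. It is just one framework among many, and the accounts resource is a hypothetical placeholder:

```javascript
// A minimal hand-rolled API using an open source framework (Express).
// The accounts resource here is a hypothetical placeholder.
var express = require('express');
var app = express();

app.get('/accounts', function(req, res) {
  res.json([{id: 1, name: 'example account'}]);
});

app.listen(3000, function() {
  console.log('API listening on port 3000');
});
```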

Deployment In The Cloud

After frameworks, I am making sure to highlight some of the leading approaches to deploying APIs in the cloud, going beyond just a server and framework, and leveraging the next generation of API deployment service providers. I want to make sure that both developers and business users know that there are a growing number of service providers who are willing to assist with deployment, and with some of them, no coding is even necessary. While I still like hand-rolling my APIs using my preferred framework, when it comes to some simpler, more utility APIs, I prefer offloading the heavy lifting to a cloud service, saving me the time getting my hands dirty.

Essential Ingredients for Deployment

Whether in the cloud, on-premise, on device, or even on the network, there are some essential ingredients to deploying APIs. In my API deployment guide I wanted to make sure and spend some time focusing on the essential ingredients every API provider will have to think about:

  • Compute - The base ingredient for any API, providing the compute under the hood. Whether it’s bare metal, cloud instances, or serverless, you will need a consistent compute strategy to deploy APIs at any scale.
  • Storage - Next, I want to make sure my readers are thinking about a comprehensive storage strategy that spans all API operations, and hopefully multiple locations and providers.
  • DNS - Then I spend some time focusing on the frontline of API deployment–DNS. In today’s online environment DNS is more than just addressing for APIs, it is also security.
  • Encryption - I also make sure encryption is baked into all API deployment by default, in both transit and storage.

Some Of The Motivations Behind Deploying APIs

In previous API deployment guides I usually just listed the services, tools, and other resources I had been aggregating as part of my monitoring of the API space. Slowly I have begun to organize these into a variety of buckets that help speak to many of the motivations I encounter when it comes to deploying APIs. While not a perfect way to look at API deployment, it helps me think about the many reasons people are deploying APIs, craft a narrative, and provide a guide for others to follow that is potentially aligned with their own motivations.

  • Geographic - Thinking about the increasing pressure to deploy APIs in specific geographic regions, leveraging the expansion of the leading cloud providers.
  • Virtualization - Considering the fact that not all APIs are meant for production and there is a lot to be learned when it comes to mocking and virtualizing APIs.
  • Data - Looking at the simplest of Create, Read, Update, and Delete (CRUD) APIs, and how data is being made more accessible by deploying APIs.
  • Database - Also looking at how APIs are being deployed from relational, NoSQL, and other data sources–providing the most common way for APIs to be deployed.
  • Spreadsheet - I wanted to make sure and not overlook the ability to deploy APIs directly from a spreadsheet, putting APIs within reach of business users.
  • Search - Looking at how document and content stores are being indexed and made searchable, browsable, and accessible using APIs.
  • Scraping - Another often overlooked way of deploying an API, from the scraped content of other sites–an approach that is alive and well.
  • Proxy - Evolving beyond early gateways, using a proxy is still a valid way to deploy an API from existing services.
  • Rogue - I also wanted to think more about some of the rogue API deployments I’ve seen out there, where passionate developers reverse engineer mobile apps to deploy a rogue API.
  • Microservices - Microservices has provided an interesting motivation for deploying APIs–one that potentially can provide small, very useful and focused API deployments.
  • Containers - One of the evolutions in compute that has helped drive the microservices conversation is the containerization of everything, something that complements the world of APIs very well.
  • Serverless - Augmenting the microservices and container conversation, serverless is motivating many to think differently about how APIs are being deployed.
  • Real Time - Thinking briefly about real time approaches to APIs, something I will be expanding on in future releases, and thinking more about HTTP/2 and evented approaches to API deployment.
  • Devices - Considering how APIs are being deployed on device, when it comes to Internet of Things and industrial deployments, as well as at the network level.
  • Marketplaces - Thinking about the role API marketplaces like Mashape (now RapidAPI) play in the decision to deploy APIs, and how other cloud providers like AWS, Google, and Azure will play in this discussion.
  • Webhooks - Thinking of API deployment as a two way street. Adding webhooks into the discussion and making sure we are thinking about how webhooks can alleviate the load on APIs, and push data and content to external locations.
  • Orchestration - Considering the impact of continuous integration and deployment on API deployment specifically, and looking at it through the lens of the API lifecycle.

I feel like API deployment is still all over the place. The mandate for API management was much better articulated by API service providers like Mashery, 3Scale, and Apigee, but nobody has taken the lead when it comes to API deployment. Service providers like DreamFactory and Restlet have kicked ass when it comes to not just API management, but making sure API deployment is also part of the puzzle. Newer API service providers like Tyk are also pushing the envelope, but I still don’t have the number of API deployment providers I’d like when it comes to referring my readers. It isn’t a coincidence that DreamFactory, Restlet, and Tyk are API Evangelist partners–it is because they have the services I want to be able to recommend to my readers.

This is the first time I have felt like my API deployment research has been in any sort of focus. I carved this layer of my research off of my API management research some years ago, but I really couldn’t articulate it very well beyond just open source frameworks, and the emerging cloud service providers. After I publish this edition of my API deployment guide I’m going to spend some time in the 17 areas of my research listed above. All these areas are heavily focused on API deployment, but I also think they are all worth looking at individually, so that I can better understand where they also intersect with other areas like management, testing, monitoring, security, and other stops along the API lifecycle.


Serverless Blueprints For Your API

Serverless is spreading across the API sector, and is something that leading API providers are beginning to embrace as part of their operations. I saw an interesting example of this out of AWS and Box lately, with the announcement of Lambda blueprints and code for integrating with the Box API via the AWS platform.

The Box serverless blueprints show you how to call the Box APIs and connect a Box webhook to a Lambda function via the Amazon API Gateway–providing some pretty interesting use cases for using Box via serverless functions.

They are some pretty basic use cases, but it is the approach that opens up an entirely new door for API integration for me–Serverless Development Kits (SDK). Every API provider should have a whole catalog of open source serverless scripts that are deployable to Lambda, and other serverless platforms. Of course, there should be one-click buttons to deploy each individual script to the cloud platform of your choice.
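To make the idea concrete, here is a rough sketch of the kind of script I am talking about–a Lambda function catching a Box webhook via API Gateway. The payload fields (trigger, source) are my assumptions about the Box webhook format, so double check them against the Box documentation:

```javascript
// A sketch of a serverless script catching a Box webhook via an API
// Gateway proxy integration. The trigger and source fields are
// assumptions about the Box webhook payload.
exports.handler = function(event, context, callback) {
  var notification = JSON.parse(event.body || '{}');

  if (notification.trigger === 'FILE.UPLOADED') {
    console.log('new file in Box:', notification.source);
    // ...call the Box API, or any other API, from here
  }

  callback(null, {statusCode: 200, body: 'ok'});
};
```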

The other side of this story for me is how Box is embracing a tighter coupling with the AWS platform as part of their operations. I am looking at how Box is providing a copy of the Box API for deployment on AWS. This all reflects how I see things working in the future, where you can deploy individual API integration scripts, as well as deploy APIs to a serverless environment like this–empowering anyone to become both API consumer and provider via the AWS, or any other cloud ecosystem.


Extending Your Apps Using Embeddable Serverless Webhooks

Auth0 has released a pretty interesting way to extend your web applications using what is an embeddable, serverless, webhooks environment–for lack of a better description. It is a way to extend applications in a scrappy, hackable, scriptable, webhooky kind of way. The extensions are definitely not for non-developers, but provide a kind of scriptable view source that any brave user could use to get some interesting things done within an existing web application interface.

Here are some of the selling features of Auth0 extensions:

  • They are deployed outside of your product and managed externally.
  • They run securely and in isolation from your SaaS application. The SaaS will not go down due to a faulty Webhook.
  • They are generally easy for a developer to create, whether it’s your own engineers, customers, or partners.
  • They can be authored in a number of programming languages.
  • They can use whatever third-party dependencies they need.
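For a feel of the programming model, here is a minimal sketch modeled on the webtask style of function that Auth0 extensions are built on. Treat the context.data fields as assumptions on my part:

```javascript
// A webtask-style extension sketch -- a single exported function, run
// in isolation, receiving context and returning a result via callback.
// The context.data fields are hypothetical.
module.exports = function(context, callback) {
  var user = context.data.user || 'anonymous';

  // do something useful with another API here, then return
  callback(null, {message: 'hello ' + user});
};
```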

I think it is an interesting approach to extending existing applications using webhooks. I’m guessing some users might be intimidated by it, but I could see it being something that developers and tech savvy users could use to hack together some pretty interesting implementations. Then when you start saving these interesting scripts, making them available to power users via a catalog, I could see some useful things emerge. I remember several jobs I’ve had where there was some sort of universal SQL text area within a system, allowing power users to craft and reuse useful SQL scripts–this seems like a similar approach, but for the API age.

I’m curious to see where this kind of solution goes. It is a quick way to extend SaaS functionality, allowing users to get more from an application without expensive developer cycles, and offloading the compute to external services. I think it is a creative convergence of what I see as embeddable, serverless, and webhooks–all part of an effective API strategy. I’m hoping it injects some creativity and extensibility into existing apps, allowing them to better serve the long tail of users’ needs in an API serverless webhook way.


API Providers Localizing Compute For Developers Using Serverless

Twilio launched Twilio Functions this last week, localizing serverless infrastructure for Twilio API consumers when it comes to powering key functionality that Twilio brings to the table. This seems like a logical move for mature API providers, keeping in tune with shifts in how developers are integrating with APIs, and deploying their applications in a DevOps, continuous integration world.
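If you haven’t seen one, a Twilio Function is just a small Node.js handler that runs inside Twilio’s infrastructure. Here is a minimal sketch based on the handler signature Twilio documents, responding to an incoming call with TwiML:

```javascript
// A minimal Twilio Function sketch -- the handler signature and the
// Twilio global are what Twilio's runtime provides, per their docs.
exports.handler = function(context, event, callback) {
  var twiml = new Twilio.twiml.VoiceResponse();
  twiml.say('Hello from a serverless Twilio Function.');
  callback(null, twiml);
};
```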

I could see other API providers following Twilio’s lead, jumping on the serverless bandwagon, and localizing compute within their API ecosystems. I can see this approach converging with other movements in the SDK space, where service providers like APIMATIC are enabling the continuous deployment of SDKs, samples, and other scripts for API integration–allowing developers to quickly deploy integration scripts in the programming language of their choice, all baked into their existing API platform developer arrangement.

It makes sense that some of these common approaches that are emerging across the API space, like containerization, webhooks, serverless, evented, and other real-time technologies, make their way to being baked in, or at least augmenting existing API operations. I don’t think that every API provider should be following Twilio’s lead in every area, but they do provide a pretty interesting example to consider when we think about where the API space might be headed–I find the most mature API providers are just as important to keep an eye on as each wave of startups.

I’ll keep an eye on serverless being localized like this with other API providers. It seems like an opportunity for some provider to develop a white label solution to help API providers deliver scripting, events, webhooks, and other emerging ways to orchestrate and integrate with APIs like Twilio is doing.


Serverless Approaches To Deploying Code Will Help Unwind Some Of The Technical Debt We Have

I am sure there is some equation we could come up with to describe the amount of ideology and / or dogma present alongside each bit and byte of code. Something that exponentially increases with each additional line of code or MB on disk. An example of this in action, in the wilds of the API space, is the difference between an SDK for an API, and just a single sample API call.

The single API sample is the minimum viable artifact that enables you to get value from an API -- allowing you to make a single API request and receive a single API response. Very little ideology or dogma is present (it's there, but in smaller quantities). Now, if an API provider provides you with a Laravel SDK in PHP, or a JAX-RS SDK in Java, or a React.js SDK, I'm significantly cranking up the volume on ideology and dogma involved with this code. All of it contributes to the type of technical debt I'm willing to assume along the way, with each one of my API integrations, and wider technological solutions.
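To illustrate what I mean by the minimum viable artifact, here is what a single API sample looks like in Node.js–one request, one response, no SDK in sight. The endpoint is a hypothetical placeholder:

```javascript
// A single sample API call -- the minimum viable artifact. The
// endpoint here is a hypothetical placeholder.
var https = require('https');

https.get('https://api.example.com/v1/resources', function(res) {
  var body = '';
  res.on('data', function(chunk) { body += chunk; });
  res.on('end', function() { console.log(body); });
});
```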

I work hard to never hold up any single technology as an absolute solution, as there are none, but I can see the potential for the latest wave of "serverless" approaches to delivering code to help us unwind some of our technical debt. Like most other areas of technology, simply choosing to go "serverless" will not provide you the relief you need, but if you are willing to do the hard work to decouple your existing code, and apply the philosophy consistently to future projects, the chances that "serverless" might pay dividends in the reduction of your technical debt will increase greatly.


APIs Need To Augment My World With A Tangible Benefit In Order To Achieve Relevance

I am spending time talking to more API providers, and API service providers, about the challenges they are facing while reaching out to potential customers, thanks to the support of my partners Cloud Elements. One of the conversations I had last week was with Diego Oppenheimer (@doppenhe) of Algorithmia (@algorithmia), who shared with me the challenges he faces in getting senior engineers to realize the potential of APIs, and the value API driven platforms like Algorithmia bring to the table.

Diego expressed that the biggest thing they face is convincing their engineer, senior dev, and other tech-focused consumers that Algorithmia isn't just something new they need to add to their existing stack, and that it is more about enabling what is already in place. While some folks will benefit from discovering entirely new algorithmic approaches in Algorithmia's marketplace, the biggest impact will come from the platform's approach to defining, scaling, and stabilizing the algorithms developers and IT folks are already putting to work.

These are the content, data, and other resources you are already putting to work, the algorithms in your business life that already have relevance in your operations. I'm constantly working to focus on the fact that APIs are all about making these resources better defined, more accessible, and more discoverable, but when you also leverage what's being called "serverless" approaches like Algorithmia, you are making them more scalable, more stable, and more usable as well.

Diego said he is always trying to reassure senior tech folks that Algorithmia isn't pointing out that they don't have the skills needed to define, deploy, and scale the bits of code (algorithms) that are making all of our worlds go around. It is about employing APIs and the cloud, and making your existing algorithms more agile, flexible, and scalable, augmenting your existing world with tangible benefits--ultimately making you better at what you are already doing.

I've talked about this concept before within my own operations. As the API Evangelist I will not scale what I do unless I can find a service that augments what I already do, justifying the added cost only by truly achieving relevance in my daily operations. Little API driven algorithmic nuggets are how I do this. All you have to do as an API service provider and enabler is convince me of the tangible benefit you deliver in my operations, and your products, services, and tooling will naturally become more relevant.


Four Buckets To Organize My API Deployment Research Into

I was being interviewed by an IBM group the other day, and I scribbled some thoughts on a piece of paper as I was rambling, which I just picked up, trying to make sense of what was going through my mind before I archive the chicken scratches.

It looks like during the call I was talking about how I see the world of API deployment, based upon how I am currently organizing the providers, services, and tooling that I find. I was discussing with them how I am moving towards breaking things down into four buckets:

  • Gateway - The more enterprise focused, IT department led API efforts, usually conducted at larger enterprises, and institutions.
  • Artisan - A more farm to table, hand crafted approach that employs organic open source REST frameworks in the process.
  • Cloud - Leveraging the latest breed of cloud API service providers that allow you to deploy APIs from common resources online.
  • Serverless - Outdoing the artisan hipster approach, all the cool kids are doing it without servers, piece by piece using Iron.io and Lambda.

My API deployment research has just been a single list of service providers and open source tooling for the last couple of years. It's time I started breaking things down a little more, and helping my readers find solutions based upon a more realistic approach to how APIs are being deployed in the wild, or at least within their organizations.

I almost added database as a layer here, but I want to keep these API deployment buckets more about the middle layer in between the backend infrastructure, and the clients that will be consuming API resources. API deployment touches on API design and definitions, and stops short of API management, with some overlap with containers, and API virtualization.

This post also reveals the fact that I write most of these stories to help me think through the world of APIs, get my thoughts in order, and help formalize them a little bit further, as I try to articulate them to myself (and you) via this blog.


If you think there is a link I should have listed here, feel free to tweet it at me, or submit it as a GitHub issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.