Paradigm Shift: Virtual to the Cloud

We live in a world where communication solutions can be hardware-based, run in a virtual machine on a local server, or be situated in the Cloud. The paradigm for communications solutions has been shifting from hardware to software to virtualization, as I’ve discussed in my recent posts. Once a solution is virtual, in principle, customers have the flexibility to control their own destiny. They can run solutions on their own premises, in the Cloud, or with a hybrid model that uses both approaches.

Let’s consider an example. Dialogic has traced this type of evolution in its SBC products. In 2013, the company positioned two products as SBCs. The BorderNet™ 2020 IMG provided both SBC and media gateway capabilities and found an audience that wanted an IP gateway between different networks or an enterprise edge device. The BorderNet™ 4000 was a new product which focused on SBC interconnect functions and ran on an internally-managed COTS platform.

Five years later, both products have changed significantly. The IMG 2020 continues to run its core functions on a purpose-built platform, but its management can be either virtual or web-based. The BorderNet™ 4000 has morphed into the re-branded BorderNet™ SBC product offering, evolving from its initial hardware focus into a more portable software offering. Customers can now run the software on a hardware server, in a choice of virtual machines, or on the Amazon Web Services (AWS) cloud. Whereas the original BorderNet 4000 only supported signaling, the BorderNet SBC can optionally also support transcoding of media, either in hardware (using a COTS platform) or in software. The journey of these products has offered customers more choices. The original concepts of both products are still supported, but the products now have elements of virtualization which have enhanced their portability. As a result, the full functionality of the BorderNet SBC can run in the Amazon cloud as well as in the other deployment models.

Once a product has been virtualized, it can be deployed in numerous ways and under a variety of business models. As customers move solutions to the Cloud, being able to run one or more instances of software in virtual machines is essential. The term Cloud tends to be used generically, but in telecom, there are multiple ways the evolution to the cloud is playing out. One example is the OpenStack movement, where open source has helped drive many private cloud deployments. Public clouds have also been popular, with variations offered by Amazon, Microsoft, Google, Oracle, IBM and others.

In my next post, we’ll consider how the technical changes we’ve been describing here have also been coupled with changes to business models.

If you participated in the evolution described here, please feel free to weigh in with your comments. If you’d like to explore strategies on how to evolve your application solutions or other communications products/services in this rapidly changing technical and business environment, you can reach me on LinkedIn.

Following the Path to Virtualization

A number of years back, my product team engaged with a Tier 1 solution provider. They wanted to use our IMG media gateway as part of their solution, but with a condition. They had limited rack space, so they wanted to use an existing server to manage our device. Up until then, we had required customers to load our element management system (EMS) software onto a dedicated Linux server. Instead, our customer asked us to take our EMS software and package it to run on a virtual machine. Our team investigated and was able to port both the underlying Linux layer and the EMS application for use on a Xen virtual machine. Voila! Our software was now virtualized and our customer was able to re-use their existing server to manage the IMG gateway.
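
As an aside, the packaging step we did for Xen would look something like the following sketch if done with today’s libvirt Python bindings. This is purely illustrative: the image path, sizing and bridge name are assumptions, not our actual configuration, and it presumes a Xen host running libvirtd.

```python
# A hedged sketch of registering an EMS appliance (its Linux layer plus the
# application, packaged as a disk image) as a guest on an existing Xen
# server, using the libvirt Python bindings. All names, paths and sizes
# below are illustrative assumptions.
import libvirt

DOMAIN_XML = """
<domain type='xen'>
  <name>ems-vm</name>
  <memory unit='GiB'>2</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/images/ems-appliance.img'/>
      <target dev='xvda' bus='xen'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open('xen:///system')  # assumes a Xen host running libvirtd
dom = conn.defineXML(DOMAIN_XML)      # register the guest with the hypervisor
dom.create()                          # boot the virtualized EMS
print(f"EMS guest '{dom.name()}' is running")
conn.close()
```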

That was my introduction to virtualization, but this approach quickly became much more important. Just a few months later, other customers asked us to port our EMS software to work within the VMware virtual machine environment. There were immediate benefits. The EMS running directly on server hardware required qualification of new servers roughly every two years, an arduous process which became more difficult over time. By contrast, the virtual EMS (which we shortened to the VEC) ran on a VMware virtual machine, so we were isolated from any server changes the customer might make. The VEC was also a software-based product, so we could offer it at a retail price well under $1,000, versus the $3,000+ price point of the server-based version. Over the next several years, more and more customers moved to the virtualized version of the software and demand for the server version declined.

A couple of years ago, I was asked to take over a new software-based load balancer (LB) product developed by a Dialogic software team in the United Kingdom. The back story here had some similarities to my earlier experience. The team was working with a major customer who really liked their software-based media resource broker (MRB), but had issues with the LB product offered by a major market player. The team built the software load balancer so that it could run either directly on a server or on a virtual machine. When we launched the product for use by all customers, our Sales Engineering team loaded the software onto their laptops using a commonly available virtualization program and were immediately able to set up prototype sessions and adjust configurations via the software’s graphical user interface. So the LB software was virtualized from the beginning. This was part of an overall trend within Dialogic, as more and more of the software-based components of products were converted for use in virtual environments.

In the early days, virtualization in telecom was mainly used for software tools like user interfaces and configuration, but that is now changing in a major way. The LB product from Dialogic runs in a totally virtual mode: everything from configuration to balancing streams of protocols as diverse as HTTP and SIP is supported, along with very robust security. In the telecom industry, virtualization is being used in several different ways as part of a sea change, where the new approach to scalability builds additional capacity and resiliency by adding new instances of software. In turn, this drives the need for new types of orchestration software, which can manage operations in a world where the paradigm requires creating, managing and deleting instances based on real-time needs.
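
To make the orchestration point concrete, here is a minimal sketch of the kind of reconcile loop such software runs: compare real-time demand against current capacity, then create or delete software instances to match. The class, names and threshold are illustrative assumptions, not drawn from any particular orchestrator.

```python
# A minimal sketch of orchestration-style scaling: capacity tracks demand
# by creating and deleting instances. Everything here is illustrative.
import uuid

class InstancePool:
    def __init__(self):
        self.instances = []

    def create_instance(self):
        instance_id = f"lb-{uuid.uuid4().hex[:8]}"
        self.instances.append(instance_id)  # in practice: boot a VM or container
        return instance_id

    def delete_instance(self):
        if self.instances:
            self.instances.pop()            # in practice: drain traffic, then destroy

def reconcile(pool: InstancePool, sessions: int, sessions_per_instance: int = 1000):
    """Create or delete instances so capacity matches current demand."""
    needed = max(1, -(-sessions // sessions_per_instance))  # ceiling division
    while len(pool.instances) < needed:
        pool.create_instance()
    while len(pool.instances) > needed:
        pool.delete_instance()

pool = InstancePool()
for load in (500, 4200, 1200):  # simulated real-time session counts
    reconcile(pool, load)
    print(f"load={load:5d} -> {len(pool.instances)} instance(s)")
```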

In my next post, I’ll talk about other ways that virtualization is being used as a key principle for building out telecom operations in a variety of Cloud environments. Virtualization is still a relatively young technological movement, but it has already helped spawn some surprising developments.

If you participated in the evolution described here, please feel free to weigh in with your comments. If you’d like to explore strategies on how to evolve your application solutions or other communications products in this rapidly changing business environment, you can reach me on LinkedIn.


Reshaping Enterprise Communications: A Tale of Two Companies

In my last few posts, I’ve described several factors which have encouraged communications solution providers to transition away from hardware and focus on software-based application solutions.

Let’s consider two companies and how they adjusted the path of their technical and business models to address these directions. Avaya is an example of a company whose solutions had a substantial amount of proprietary hardware around the time they split off from Lucent in the year 2000. Avaya had a leading market share in multiple markets targeted to enterprises, including PBXs, which provided telephone infrastructure for enterprises, and Call Centers, which used Avaya components to meet customer needs for highly scalable inbound and outbound communications. But the advent of IP-based technology and new protocols such as SIP began to change all of that. The mantra of IP-based communications was that voice was just another application that ran on an IP stack. This massive technical change was a major challenge for Avaya, since they’d built their business on selling PBX and call center solutions running on their own hardware, and the cost of sustaining this business model was high.

So starting around 2002, they executed a pivot to adjust to the new situation. First, they introduced new IP-based versions of their PBX technology, ranging from IP phones to an IP-based PBX and a suite called IP Office for small to medium sized businesses. Second, they told potential partners that they wanted to move out of the hardware business and focus on the value provided by their software. Third, they created a partner program, the Avaya DeveloperConnection program (later shortened to DevConnect), and encouraged partners to either build on or connect to Avaya solutions. As a result, Avaya was able to cultivate relationships with hardware appliance companies for products like media gateways and focus more on building out their application software. The DevConnect program also fit well with Avaya’s increased role as an integrator: solutions for customers could be built using not only Avaya technology, but also DevConnect certified products. So Avaya had an approach to building out software-based solutions using IP, but they also had a large installed base of hardware-based solutions, so they were not as nimble as some of their competitors.

The advent of SIP helped to encourage new market entrants into the communications software space. A prominent example was Microsoft. Starting around 2007, Microsoft introduced its new communications solution, Office Communications Server 2007, or OCS. OCS used SIP as its backbone protocol and touted the ability for enterprises to eliminate the cost of a PBX and replace it with software running on Commercial Off the Shelf (COTS) servers. Enterprises still needed to connect to the telephone networks run by service providers, which were heavily based on circuit-switched technologies, so Microsoft started its own partner and certification program to qualify third-party products such as media gateways. Microsoft also had a lot of marketing muscle, since applications such as Microsoft Office were widely used within enterprises, so they had a ready audience among the information technology managers at customers. In 2010, Microsoft re-branded the offering as Microsoft Lync. Microsoft quickly became a big player in the new Unified Communications market and began to take market share away from traditional PBX vendors such as Avaya. Microsoft also continued to be aggressive in cultivating relationships with third-party hardware partners, who added support for Lync-compatible IP phones and newer IP-based products such as Session Border Controllers (SBCs). Microsoft has since re-branded Lync as Skype for Business, but the underlying technology and business model are an evolution of Lync.

The market battle for leadership in communications for enterprises continues, but the momentum has shifted heavily to software-based solutions, and most hardware components are provided by other vendors. One exception to this direction is Cisco. They have maintained a strong presence in the hardware side of communications by virtue of their leading market position in routers and have incorporated additional functions such as media gateways and SBCs into their routers. However, Cisco has also built its own software-based Unified Communications suites and Contact Center solutions, so they use the software-based applications model, but pair it with Cisco network components to create their solutions.

In summary, the advent of SIP is one of several factors which have radically changed the landscape for communications solutions. In this post, we’ve considered how Avaya and Microsoft built their business strategies based on the strong move to IP-based software solutions over the last decade. In my next post, I’ll talk about another important technology development, virtualization, which is in the process of re-shaping how both application software and communications infrastructure products are being developed and brought to market today.

If you participated in the evolution described here, please feel free to weigh in with your comments. If you’d like to explore strategies on how to evolve your application solutions or other communications products, you can reach me on LinkedIn.


How IP Media Changed the Voice Business

This post is about a critical technical development in the history of Voice over IP which had a wide-reaching impact on the development of voice and related communications solutions. I’m referring to IP media, which was introduced early in the 2000s and has been ramping up ever since.

In my last two posts, we discussed two important technologies which were instrumental in the early and middle years of voice-based solutions. The first post covered the introduction of voice boards and the second post reviewed the impact of media gateways on voice solutions.

On the business side, the introduction of media gateways provided a stepping stone which encouraged the pioneers of Voice over IP and other voice solution providers to decide where they offered the most strategic value to their customers. In particular, should they focus on applications, or on enabling technology which could either complement applications or be used to build underlying infrastructure? The introduction of IP media pushed companies further down this decision path.

Two directions emerged. Several manufacturers of voice boards began to kick the tires on creating software-based versions of their voice boards. In the post on voice boards, we noted how voice and media functions were controlled using Application Program Interfaces (APIs) and that private APIs tied to particular vendor product families gained much more market traction than attempts at standards-based APIs. Hence, an early product in the space, Host Media Processing (HMP) from Dialogic®, offered the value proposition of being software-based, but was still controlled using the same set of APIs that were used with Dialogic voice boards.

In parallel, another movement emerged. Two startup companies, SnowShore and Convedia, introduced a new category of product called the Media Server. In the last post, I mentioned how the Session Initiation Protocol (SIP) started to gain traction early in the 2000s. The Media Server concept took SIP as a starting point and added the ability to manipulate media by using a markup language, typically based on the Extensible Markup Language (XML) recently standardized by the World Wide Web Consortium (W3C). The implications were profound, both on the technical and business sides, but like many innovations, the transition to this new approach took years to develop. For example, by the time Media Servers truly hit the mainstream, the two originating companies had both been acquired by larger organizations who were able to make the capital investment needed to build sustainable businesses for media servers.

Of the two approaches, IP media controlled by APIs was essentially an incremental development, while IP media managed by Media Servers introduced radical change. Let’s consider why this was the case. IP media controlled by APIs retained the API-based model for control of media. For existing voice application developers, this was great. They could start the transition away from including voice board hardware in their solutions and thus vastly simplify their go-to-market strategies. As a result, many voice application developers raised the flag and said they were now out of the hardware business and their solutions were totally software-based. In reality, this typically meant their application software would run on industry standard Commercial Off the Shelf (COTS) PCs or servers using Intel or compatible CPUs such as those offered by AMD. But by using IP media, the solution providers could skip the step of adding voice boards to their computer-based solutions and eliminate all of the complications of integrating third-party hardware. They did have to be careful to have enough CPU horsepower to run both their applications and the IP media software, but it represented a major step forward. Voice and multimedia application solutions had now become a separate business in the Voice over IP market.

I mentioned that the introduction of the IP-based Media Server was a more radical step. So, I’ll review a few points to back up that assertion.

  1. The need to have a private API controlled by a single vendor went away. The new concept of “the protocol is the API” replaced the programmatic approaches which had required developers to use programming languages such as C, C++ or Java for media operations. Instead, simple operations like playing back voice prompts or collecting digits could be accomplished using the combination of SIP and an XML-based markup language (see the sketch after this list).
  2. The application developer could focus clearly on making their applications best-of-breed and partner with media server vendors, who would focus on creating world-class voice and multimedia solutions.
  3. The application developers no longer needed to include media processing in their applications at all, thus reducing the CPU cycles needed for those media operations. However, the application developers did need to partner closely with the media server vendors and ensure their SIP + XML commands would work correctly when issued over an IP network to the paired media server.
  4. The concepts of the standalone application server and the standalone media server got included in the new IP Multimedia Subsystem (IMS) architecture, which was being standardized by the Third Generation Partnership Project (3GPP) as a linchpin for the next generation of mobile networks.
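
To illustrate the first point above, here is a simplified sketch of the “protocol is the API” style. The XML dialect is hypothetical and deliberately stripped down; real media servers used standardized markups such as MSCML or MSML, carried to the media server inside SIP messages, and the SIP headers shown are abbreviated.

```python
# A sketch of "the protocol is the API": instead of calling a vendor C API,
# the application sends an XML media command to the media server inside a
# SIP request. The XML dialect below is simplified and hypothetical.
import xml.etree.ElementTree as ET

def build_play_collect(prompt_url: str, max_digits: int) -> bytes:
    # "Play a prompt, then collect digits" expressed as markup, not code.
    root = ET.Element("media-request")
    play = ET.SubElement(root, "play")
    play.set("url", prompt_url)
    collect = ET.SubElement(root, "collect")
    collect.set("maxdigits", str(max_digits))
    return ET.tostring(root, encoding="utf-8")

body = build_play_collect("http://prompts.example.com/welcome.wav", 4)

# The XML body rides inside a SIP message (e.g., INFO) to the media server,
# which plays the prompt, gathers digits, and reports back in a later
# SIP message. Headers here are abbreviated for illustration.
sip_info = (
    "INFO sip:mediaserver.example.com SIP/2.0\r\n"
    "Content-Type: application/xml\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n"
) + body.decode("utf-8")
print(sip_info)
```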

So the move toward IP media was a major step forward for the Voice over IP industry and encouraged further market segmentation. For the first time, companies could specialize in applications, include the ability to support voice, tones and other multimedia, and do all of this in software which would run on industry standard COTS servers. In turn, hardware component and appliance vendors were able to focus on more distinct market segments, where they could utilize either embedded solutions technology or start making the move toward running media on COTS servers.

In my next post, I’ll talk more about how the business models for voice and unified communications solutions have evolved due to the more widespread use of server- and appliance-based technology for applications, signaling and media.

Impact of Media Gateways on Voice Solutions

This is the latest in a series of posts on how voice development has been moving from hardware- to software-centered models. In my last post, we reviewed the classic approach to developing voice-centered solutions, which typically utilized voice boards. In this post, I’ll review how media gateways helped change the model.

In the classic voice model, the voice board often was used both for voice processing and to connect to a phone network, which might be either digital or analog. When Voice over IP (VoIP) began to emerge, new options became available for voice solutions. In the early days of VoIP, the H.323 stack was used to connect to IP networks, but the Session Initiation Protocol (SIP) got some crucial support in the 2000-2001 time frame from Microsoft and the Third Generation Partnership Project (3GPP), the leading standards organization for mobile phone networks. Within a few years, voice developers began to add SIP to their development capabilities. This had multiple implications.

Let’s look at some business-side drivers. After the dot com crash and the related “Telecom Downturn,” which decimated the engineering staffs of the large vendors known as Telecom Equipment Manufacturers (TEMs), these companies were looking for ways to reduce the amount of hardware in their solutions. In the classic voice solution, the voice board processed media and also connected to the circuit-switched networks. When SIP became popular, many of the TEMs started saying they wanted to move away from the hardware business. Some of these companies started processing media as part of their voice applications and others continued to rely upon voice boards for this processing. In either case, if they outsourced the connection to the network to another box, they could reduce the number of hardware-dependent elements in their solution and simplify the process of building and shipping their solutions.

Enter the Media Gateway. As application developers included SIP in their solutions, they could connect to a media gateway via SIP and then let the media gateway take over the role of connecting to the existing circuit-switched network. This had been possible before SIP with H.323, but SIP offered much more flexibility for the complex call processing needed by voice developers and continued to gain market momentum. In turn, various hardware companies started building purpose-built media gateway appliances to connect to digital or analog networks. The gateways supported the most common networks such as ISDN first, but eventually some gateways got more sophisticated and added Signaling System #7 (SS7) support as well. This decomposition of the voice solution offered benefits for both types of vendors. The solution vendors could start their move away from hardware and focus more on software, whereas the media gateway vendors were able to specialize in connections between SIP and the circuit-switched networks. Each type of company could specialize in its area of expertise, and the solution providers could add value to their solutions by buying best-of-breed media gateways. Since the network protocols were standards-based, the gateways needed to have robust standard protocol implementations, and this helped create a competitive market for media gateways.
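
As a rough illustration of that division of labor, here is a sketch of the SIP side of such a call: the application addresses an ordinary phone number to the gateway, and the gateway maps the call onto its circuit-switched trunks. The hosts and number are placeholders, and the headers are abbreviated.

```python
# A simplified sketch of an application handing a PSTN-bound call to a media
# gateway over SIP. Hosts and the phone number are placeholders; a real
# INVITE would also carry SDP describing the media streams.
def build_invite(pstn_number: str, gateway_host: str, app_host: str) -> str:
    # The Request-URI carries the phone number; the gateway terminates the
    # SIP leg and places the call onto its circuit-switched (ISDN/SS7) side.
    return (
        f"INVITE sip:{pstn_number}@{gateway_host};user=phone SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP {app_host}:5060;branch=z9hG4bK776asdhds\r\n"
        f"From: <sip:app@{app_host}>;tag=1928301774\r\n"
        f"To: <sip:{pstn_number}@{gateway_host}>\r\n"
        "Call-ID: a84b4c76e66710\r\n"
        "CSeq: 1 INVITE\r\n"
        "Content-Length: 0\r\n"
        "\r\n"
    )

# In a deployment, this message would be sent over UDP or TCP to the
# gateway's SIP port (usually 5060); here we just print it.
print(build_invite("+15551230100", "gw.example.com", "app.example.com"))
```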

As a result, solution developers took another step along the path of reducing their dependency on embedded hardware, since they could now outsource the network connection to a media gateway.  In the next post, I’ll talk about developments in IP-based media which continued the evolution toward software-based voice applications.

If you participated in the evolution described here, please feel free to weigh in with your comments. If you’d like to explore strategies on how to evolve your company’s solutions to meet customer needs, you can reach me on LinkedIn.

Voice Development Models: A Journey Begins

During the past three years, I had product management responsibilities for products which covered the spectrum from hardware-centered to software-centered development.  In telecom, there’s been an evolution in development models as solution providers have taken a series of steps to gradually move away from hardware.  However, like many technical trends, there is a long tail as the older technology goes away only gradually.  In this post and others to follow, I’ll review models for voice applications at a high level and consider some steps along the way which have led to the software-oriented nirvana sought by many solution providers.

In the Nineties, voice development was often done with PCs at the center, and embedded board hardware was an important component. The CPUs of the PCs ranged from models like the 386 on up to the Pentium. Voice applications entailed lots of media processing, so voice boards with lots of Digital Signal Processors (DSPs) were critical to get scalable applications. The DSPs did all of the heavy lifting for the media, and the CPU of the PC was freed up to support the application side for solutions such as call centers, interactive voice response and fax on demand. Many of the applications developed during this time are still being used, though the actual PCs or servers may have been replaced and there may also have been some upgrades to the voice board hardware. Nonetheless, thousands of voice boards are still being sold to support these applications. On the software side, there were efforts to create industry-standard Application Program Interfaces (APIs) such as S.100 from the Enterprise Computer Telephony Forum (ECTF) and T.611 from the International Telecommunication Union, but most of the boards were controlled using private APIs supplied by the board vendors.

In the model above, the boards and applications were all designed to work over the circuit-switched telephone network, which ranged from analog services (POTS, or Plain Old Telephone Service) to digital approaches which began with the Integrated Services Digital Network (ISDN) and continued with the Signaling System 7 (SS7) network overlay. The phone companies worldwide assumed that these circuit-switched networks, with Time-Division Multiplexing (TDM) and the related seven-layer Open Systems Interconnect (OSI) model, would be the focus going forward, replacing analog networks, and would perhaps be supplemented by new OSI stacks such as Asynchronous Transfer Mode (ATM).

But a revolution had already begun, as alternative, flatter telecom stacks based on the upstart Internet Protocol (IP) were being used both for existing applications such as email and new applications like the World Wide Web. In the telecom industry, a few companies began to explore running voice over IP networks, thus creating a new Voice over IP (VoIP) technical and business model for phone networks. In the early days (from the late Nineties to the early 2000s), VoIP was mainly used to bypass existing long distance networks to reduce long distance charges, but the range of applications for IP soon began to expand.

At first, this looked like a great opportunity for the voice board manufacturers. Now, they could add IP support to their boards or potentially just give software developers access to Ethernet ports on the PC. An important new board category was created: the media gateway. These early media gateway boards allowed developers to use the existing circuit networks for most of their connections, but also tap into new IP networks where they existed. Continuing the same API trend, board vendors extended their private APIs to support IP in addition to TDM. So now solution developers could run their solutions over both existing TDM and new IP networks, using these new hybrid boards which often could support voice, fax and tones.

In my next post, I’ll talk about how media gateways helped to kick off a new voice development model which accelerated the separation between software and hardware for voice and the new application category which became Unified Communications.

If you participated in the evolution described here, please feel free to weigh in with your comments.  If you’d like to explore strategies on how to evolve your solutions, you can reach me on LinkedIn.

Testing Product Proof of Concepts

Product managers are called upon to accomplish many tasks at different points in a product life cycle. One which can be important, and potentially even a game changer, is to develop a proof of concept for a product and then test the idea out. I’ll provide an example.

In one case, my division wanted us to explore a potential product concept for a hot business area — the Internet of Things (IoT). The first challenge was to look at the market and see if there was a value proposition that made sense for the company. My company was well known for being able to integrate hardware and software, so a product that built on that approach potentially offered both a technical and business fit. Next came study of the market and examination of the current players. The IoT space is somewhat crowded, but various companies like Intel offer starting points in the form of toolkits, white papers and architectures, and there are several industry organizations that also offer guidance to would-be participants. Based on those resources and other investigations, we developed a product “Proof of Concept” presentation. It addressed the product concept, identified where our company added value and contained a series of questions for potential partners/customers.

The next step was to test it out.  Our division put the word out to our sales team that we had a product concept we’d like to review with potential prospects.  When a couple of candidates were identified, we set up calls.  We shared the product concept in the form of a presentation and used it to discuss three main areas with the prospects:

1) What did they think of our concept?

2) What had their experiences been with similar products?

3) Could they share any pain points?

The results were interesting.  The prospects were intrigued by the concepts, but immediately began to compare them with existing products. That quickly led to the third phase, a discussion of pain points.  All of this discussion provided helpful clues on where the markets were being well served and where there might be openings for new products. By meeting with multiple prospects, we got diverse perspectives and also heard some common themes.  The testing confirmed the product concept had potential and could be the basis for further explorations and refinement if the company chose to take those next steps.

In summary, one way to test the viability of a product concept is to create a “proof of concept,” which may be as simple as a presentation or at the next level, a more complete working model. Then, it’s important to test the concept and how well it meets the needs of potential customers, before making the larger investments needed to bring a final version of the product to market.

Has your company seen similar challenges in considering new product offerings? What approaches were taken to test the new product concepts? Please feel free to offer comments on your experiences. Or, reach out to me on LinkedIn at http://www.linkedin.com/in/james-rafferty-ma to discuss needs for similar projects.


Refining the Product Vision

One advantage I gained when I went independent for over a year was the chance to consider, with a fresh perspective, how to be an effective Product Management leader. As a Product Line Manager, it’s very easy to get caught up in the product, enhance various features and dig deep into the technical aspects. All of this is fine, but it may cause you to overlook opportunities to go beyond the product itself and work with a broader team to build more success in your target markets.

In this post, I’ll discuss an example from my own experiences of how Product Management can be transformed to go beyond the product and create new success stories. In particular, let’s consider an example of refining the product vision and then creating additional marketing tools to support that.

In my last product management role at Dialogic, we had a media gateway product which was doing well in the marketplace, but most of the sales were for the traditional use cases of translating between circuit-based networks using Time Division Multiplexing (TDM) and the newer SIP-based Voice over IP networks. When I did Google searches on related keywords, the presence of our product was much lower than I would have expected, given our market performance: #2 in the market for low density trunking gateways sold to service providers and related customers for several years running.

To address this, I worked with marketing colleagues to build an updated marketing plan. A core element of the plan was to look at use cases and create content which would explain why the use case was important and how the right kind of media gateways could help provide a solution.

For example, SIP Trunking has been a major driver for growth in the Voice over IP market for several years running and is usually tied to the sales of Session Border Controllers (SBCs). With SIP trunking, enterprises communicate with the outside world by connecting from their enterprise campuses to a service provider. Traditionally, service providers made this connection using ISDN trunks, which needed a fair amount of advance setup time to establish. Since SIP trunks run over IP and don’t require dedicated circuits, the time to deployment can be much faster and the price to the enterprise customers is reduced. As a result, the payback time for moving to SIP trunks can be very fast. But… Yes, it always seems there is a but.

But, in order to make this change, the enterprise needs to either change their existing phone systems so that they are fully IP-based or establish a transition plan. In the latter case, the transition plan needs to enable them to use their existing TDM phone system infrastructure within the enterprise, but still connect to SIP trunks and gain savings in operating expense. This was consistent with industry data which showed that 40% of enterprises still had investments in TDM-based infrastructure. And it turns out that a media gateway which can manage that transition from TDM to SIP could be a valuable part of that strategy. As a result, we created a white paper which talked about SIP trunking and why media gateways were an effective solution for the related TDM to IP use cases. In addition, we updated our marketing collateral on the web and in our product presentations to make sure this SIP Trunking use case was highlighted.
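
The payback claim above comes down to simple arithmetic. Here is an illustrative sketch with hypothetical figures; real numbers depend on trunk counts, local rates and gateway costs.

```python
# A back-of-the-envelope sketch of SIP trunking payback. All figures are
# hypothetical assumptions for illustration only.
monthly_cost_isdn = 450.0          # assumed monthly cost of existing ISDN trunks
monthly_cost_sip = 250.0           # assumed monthly cost of equivalent SIP trunk capacity
one_time_transition_cost = 2400.0  # assumed one-time outlay (gateway, configuration)

monthly_savings = monthly_cost_isdn - monthly_cost_sip
payback_months = one_time_transition_cost / monthly_savings
print(f"Payback in about {payback_months:.0f} months")  # -> about 12 months
```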

Within weeks after the new content was posted, we started getting much more visibility in our search rankings on the web and many prospects were downloading the new white paper. In turn, we were also hearing about related business opportunities which aligned closely with this refined product vision. We also highlighted the revised strategies in a webinar.

This was just one example of how we expanded the product vision and re-focused the sales team on a broader set of opportunities for this product.

In summary, in this post we reviewed an example of expanding the product vision to highlight an important high growth use case and then implementing related marketing content and tactics to reinforce the vision.

If you’d like to continue the conversation, please leave a comment. If you’d like to explore how similar approaches might benefit your company’s product strategy, you can reach me on LinkedIn.


Going Independent — Again

After a challenging but rewarding three-year stint in product management at Dialogic, I am independent again as of early 2018. Four years back, I ran my consulting business for a year and gained some additional training before re-joining Dialogic. In this post, I’ll talk about some new and different approaches I took in my role with the company during the past three years that produced positive results.

  1. Using Agile to Manage and Change Priorities – In 2014, I took a course in SCRUM at Quality and Productivity Solutions and got certified by SCRUMStudy as a SCRUM Product Owner.  At Dialogic, I wore many hats and had frequent changes in priorities. By creating SCRUM Epics and Stories, I updated my priorities weekly and was able to make fast changes when needed in reaction to market changes, new projects or other internal factors.
  2. Building and Managing Teams – In earlier Product Management roles, I mostly focused on the product in areas such as managing the roadmap, setting pricing and training Sales. During the past three years, I reached out to the other departments and convened cross-functional meetings about once every two weeks. In other words, I managed the products as programs. This way, our departments worked together to drive success for our products, and the results were very positive for both startup products and more mature product lines. For example, we identified customer pain points and then the team created solutions to deal with them.
  3. Going Virtual – My products varied over the three years, but included a mix of hardware and software, or were purely software. A trend which cut across several of the product lines was the need to run the software on Virtual Machines, notably in the VMware and Kernel-based Virtual Machine (KVM) environments. For example, by running in a virtual environment, customers got to use their own choice of servers for routine management tasks. For our virtual load balancer product, Dialogic® Powerville™ LB, we took it a step further: all of the software could run on VMware or other virtual environments, and it included sophisticated features such as built-in redundancy.
  4. Marketing via Effective Content Management – In the past year, I worked with the Dialogic marketing team to devise a marketing plan for the Dialogic IMG 2020 Integrated Media Gateway and revise our content management to help drive more leads.  We wrote several new white papers on important use cases such as SIP Trunking, Transcoding and SS7 to SIP interworking.  We also promoted recent design wins and market leadership via press releases and conducted webinars which tied into all of these marketing themes. The net result was to bring more attention to these products, improve our SEO rankings for related product searches and reinforce our position as a market leader in the low density trunking media gateway market.

These four approaches are examples of ways we were able to innovate.  They enabled me to both be a product-focused individual contributor and lead broader team efforts that produced lasting results. If you’ve had similar needs or experiences, I’d love to hear your feedback.

I’ll write more about my recent experiences in Product Management, Marketing and Communications Technology, plus thoughts on the year ahead, in upcoming posts.


Faxed: A Book Review – Ruminations

In my last post, I talked about the book written by historian and professor Jonathan Coopersmith entitled Faxed: The Rise and Fall of the Fax Machine. I left off as fax entered the late Eighties and became wildly popular. As Coopersmith recounts, this confounded the experts, who were expecting electronic messaging, videotex or a variety of other technologies to supersede fax.

In my own work life, I’d worked for a fax company for a decade by then, but didn’t get close to the technology until I joined Product Line Management and wrote the business case for a fax modem product. Like many companies, Fujitsu sold fax machines, but we also started developing computer-based products. Around 1989, we released the dexNet 200 fax modem and accompanying software called PC 210, which ran on IBM-compatible computers. A year later, my boss sent me to the TR-29 fax standards committee, and I discovered that this group was writing standards that would have a big impact on most companies in the wild west activity known as computer fax. I also joined an upstart industry group, which became the International Computer Fax Association (ICFA), and started reporting to them on the standards being developed at TR-29. Fax was hot, but Fujitsu was focused on its mainframe computer business and shut down the US-based business called Fujitsu Imaging Systems of America (FISA) that employed me. After a month of soul searching, I decided to start a consulting business called Human Communications, which advised clients on fax and related technologies. The ICFA was one of my first clients, and I continued to attend TR-29 and gradually built up a client list among fax machine and computer fax companies.


By late 1993, the business was going well, and that’s when I first met Jonathan Coopersmith. In his book, he talks about this period as being the heyday of fax. Fax did extremely well in the US, as pizza parlors installed fax machines and offices of every size had one. But it became even more popular in Japan. The Japanese fax manufacturers competed fiercely, but also cooperated to ensure interworking between their machines. I started attending the meetings of ITU-T Study Group 8 around this time, as we were building the new extensions to the very popular Group 3 fax standard. There was a newer digital standard called Group 4, but Group 3 took on its best attributes and basically shut Group 4 out of the market.

In the mid-Nineties, the Internet and the World Wide Web exploded and began a massive transformation in the way the world communicated.  In the fax community, it was obvious to many of us that the Internet would have a huge impact, so we started a very aggressive effort to write Fax over IP standards.  Dave Crocker, a co-author of the standard for electronic mail in the Internet Engineering Task Force (IETF), came to TR-29 and asked for volunteers to begin the work of Internet Fax in the IETF.  A similar effort began in the ITU.  The work proceeded from ground zero to completed standards by 1998, which was unusually fast for standards groups.

I left the fax consulting business in late 1999 and joined the Voice over IP industry.  By then, there were already signs that fax would lose its dominance. The World Wide Web totally took over the information access role that had been played by Fax on Demand.  The chip companies stopped focusing on fax and by the time a new version of the T.38 standard was written in 2004 to accommodate the faster V.34 modem speeds for fax over IP, the VoIP chips didn’t support it.

In Japan, as Coopersmith explains, fax had been even more dominant than in the US. The visual aspects of Japanese characters such as kanji meant that computer keyboards were much slower to develop in Japan than in the US market. By the time I met Jonathan again in 2004, fax had begun its next transition and had become more of a niche business both in the US and in Japan. It still sells well in some market segments, and there has been a bit of a renaissance as the T.38 fax standard has kicked in to accompany Voice over IP, but the arc of technological history is now in the long-tail phase for fax.

Fax is a classic example of a technology that had many false starts — the first half of the book shows just how many there were — but eventually caught fire as all of the pieces came together for massive success. This book offers good context on all of this and has many useful lessons for technologists. Great technology is never enough by itself, but when the right combination of market needs and technology comes together, amazing things can happen. Faxed, the book, tells that story for fax.