
Dzone


Recent posts on DZone.com

Cover Your Apps While Still Using npm

Mon, 01/15/2018 - 6:01am

Every once in a while, the JavaScript and Node.js ecosystem experiences something that is deeply disturbing to many developers: an outage of the npm registry.

Whenever this happens, we hear outcries that npm is the single point of failure for the entire ecosystem, and that the ecosystem is doomed because of it.

Categories: Technical

Database Resolutions for 2018

Mon, 01/15/2018 - 6:01am

The New Year brings about the perfect time to start fresh and set some new goals. So, in the spirit of things, we've laid out a few New Year's resolutions that all database professionals can take into account as they look to turn over a new leaf in 2018 and build on their successes.

Create Better, Real-Time Customer Experiences

In the summer of 2017, we partnered with research firm Vanson Bourne to conduct a study on the importance of real-time experiences for today's consumers. Those findings are detailed in our report, The Psychology of Waiting: The Business Impact of Diminishing Consumer Patience. What we found is that immediacy, accuracy, and relevancy are crucial and already expected in the eyes of consumers, and the results of not delivering on that expectation will cost brands their customer base.

Categories: Technical

Doing Multiple Searches in VS Code While Refactoring

Mon, 01/15/2018 - 6:01am

I spend a lot of my time refactoring code across a very large, legacy codebase at work. Oftentimes, I'll do a search for something and work my way through the results over a period of days. Sometimes, something I see leads me to do another search and a minor refactoring job that is part of the overall one. Hence, I sometimes end up with a "stack" of different searches representing all the parts of the overall refactoring job. In each of these search results, it's important not to "lose my place" as I go down the list.

WebStorm/IntelliJ/PyCharm support this workflow really well:

Categories: Technical

The Primary Issue Affecting Performance Testing and Tuning

Mon, 01/15/2018 - 6:01am

To gather insights on the current and future state of performance testing and tuning, we talked to 14 executives involved in performance testing and tuning. We asked them, "What are the most common issues you see affecting performance testing and tuning?" Here's what they told us:

Best Practices
  • Incorrect use of network bandwidth across applications. Inconsistent matches between the backend and front-end service APIs hurt responsiveness. Poor use of rendering. Lack of awareness of what happens to an application in production from the perspective of the DevOps team.
  • They do not have best practices in place for the fundamentals – ensuring queries are valid before production rather than leaving them untested. Help the team establish and execute best practices to guarantee the high performance of the app.
  • I’m a proponent of code maintenance, so while new feature development is great, it has to have a well-maintained platform. An application that has been developed for a few years will inevitably contain inefficient and unnecessary code. Performance testing might reveal bottlenecks that come up as a result of a no-longer-needed piece of code, or might drive the developers to think of other – more efficient – ways to implement certain parts of code. 
  • Inability to get an integrated view of the infrastructure and the application
  • Issues with DNS providers because all external communications rely on external DNS providers. How proxies may function with the geographic distribution of users and locations. 
  • They configure in a way that is not optimized. We identify the optimal configuration and train users how to use the application. This varies based on the technologies the customers use (i.e., the latest mobile push is faster than SMS). 
  • 1) Deployment environment management. Poor UX information conclusion because of environmental issues – wrong version of Chrome or incorrect version of the mobile device being tested. 2) Focus testing on the use cases that matter. When testing, start with the end user in mind. 
  • Static analysis when developing code so you check before reaching QA. Always look at memory and CPU since they are leading indicators of bad performance or not using resources as efficiently as possible. Check load balance – customers, where they’re from, peaks. There are good tools available to check traffic. 
  • The most common scenario is tuning the system for low latency instead of throughput. Of those, the most common is low-latency CPU responses or low-latency storage, with a low-latency network being occasionally an issue. Best practices normally identify the main parameters that need tuning, but usually there is a bit of additional work to specify the parameters for a specific workload, and on occasion it's specific to the machine. A recurring topic is tuning the CPU frequency management of the system. This tends to be straightforward once it is identified that it is required. It's not something that can be disabled by default, as power consumption may be too high.
  • Lack of preparation is a common issue. Thankfully, with a cloud-based approach, the impact of this is reduced, as customers can start and stop their test efforts on demand. During test execution, many customers have not anticipated or planned for the size of the test being conducted. This might mean that things like intrusion detection or DDoS prevention mechanisms are triggered, hampering the test. Or it may simply mean they exhaust available capacity in a quick succession of tests. These aren't necessarily bad outcomes, as they help explain system behavior, but we do see customers caught off guard by it.
  • Understanding the workload being generated, through to the observable metrics in the system, can also be an issue for less experienced teams. An over-reliance on single metrics or narrow aspect views of the system under test can compound these types of issues. The best success is enjoyed when one understands end-to-end system performance. More often than not, the black box left out of scope, for example, a load balancer, becomes the primary culprit in unexplained poor performance. A common issue we see is a single person or entity being nominated as the performance expert. Performance has such a wide impact these days that a multi-skilled team, or the ability to engage with a wider team, means you will generally achieve better outcomes from your testing and tuning.
Other
  • People are tired of alerts showing a problem when there really isn’t one. This leads to alert fatigue and increases the likelihood that an important alert may be missed. Being overwhelmed with alerts and feeling like they don’t provide value is problematic.

Categories: Technical

Ambassador and Istio: Edge Proxy and Service Mesh

Mon, 01/15/2018 - 6:01am

Ambassador is a Kubernetes-native API Gateway for microservices. Ambassador is deployed at the edge of your network, and routes incoming traffic to your internal services (aka "north-south" traffic). Istio is a service mesh for microservices, designed to add L7 observability, routing, and resilience to service-to-service traffic (aka "east-west" traffic). Both Istio and Ambassador are built using Envoy.

Ambassador and Istio can be deployed together on Kubernetes. In this configuration, incoming traffic from outside the cluster is first routed through Ambassador, which then routes the traffic to Istio. Ambassador handles authentication, edge routing, TLS termination, and other traditional edge functions.

Categories: Technical

The Difference Between Data Science, Machine Learning, and AI

Mon, 01/15/2018 - 6:01am

When I introduce myself as a data scientist, I often get questions like "What's the difference between that and machine learning?" or "Does that mean you work on artificial intelligence?" I've responded enough times that my answer easily qualifies for my "rule of three:"

The fields do have a great deal of overlap, and there's enough hype around each of them that the choice can feel like a matter of marketing. But they're not interchangeable. Most professionals in these fields have an intuitive understanding of how particular work could be classified as data science, machine learning, or artificial intelligence, even if it's difficult to put into words.

Categories: Technical

10 Steps to Cloud Happiness (Step 6): The Human Aspect

Mon, 01/15/2018 - 6:01am

Every journey starts at the beginning, and this journey's no exception: a journey of 10 steps to cloud happiness.

As previously presented in the introduction, it's possible to find cloud happiness through a journey focused on the storyline of digital transformation and the need to deliver applications into a cloud service.

Categories: Technical

Search Your Files With Grep and Regex

Mon, 01/15/2018 - 6:01am

How do you search through a file? On the surface, this might seem like sort of a silly question. But somewhere between the common-sense answer for many ("double click it and start reading!") and the heavily technical ("command line text grep regex") lies an interesting set of questions.

  • Where does this file reside?
  • What kind of file is it?
  • How big is the file?
  • What, exactly, are you looking for in the file?

Today, we're going to look at one of the most versatile ways to search a file: using grep and regex (short for regular expression). Using this combination of tools, you can search files of any sort and size. You can also search with extremely limited access to your environment, and if you get creative, you can find just about anything.
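
grep itself lives on the command line, but the underlying idea – scan line by line and keep the lines that match a pattern – ports anywhere. As a rough illustration (not a replacement for grep; the class name and arguments here are ours), this is a minimal grep-style matcher in Java using java.util.regex:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Pattern;

public class MiniGrep {
    // Roughly `grep -E "<regex>" <file>`: print every line containing a match.
    public static void main(String[] args) throws IOException {
        Pattern pattern = Pattern.compile(args[0]);
        Files.lines(Paths.get(args[1]))
             .filter(line -> pattern.matcher(line).find())
             .forEach(System.out::println);
    }
}

Running java MiniGrep "error [0-9]+" app.log would print the matching lines, much as grep -E "error [0-9]+" app.log does.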

Categories: Technical

Spring, Reactor, and ElasticSearch: Benchmarking With Fake Test Data

Mon, 01/15/2018 - 4:01am

In the previous article, we created a simple adapter from ElasticSearch's API to Reactor's Mono, which looks like this:

import reactor.core.publisher.Mono;

private Mono<IndexResponse> indexDoc(Doc doc) {
    //...
}
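
The excerpt shows only the signature; a plausible body (a sketch on our part, not the author's exact code – the client field, index name, and Doc.getJson() method are assumptions) bridges ElasticSearch's callback-based indexAsync(...) to a Mono via Mono.create:

import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.common.xcontent.XContentType;
import reactor.core.publisher.Mono;

private Mono<IndexResponse> indexDoc(Doc doc) {
    return Mono.create(sink -> {
        // Assumed: client is a RestHighLevelClient field, Doc.getJson() returns a JSON string.
        IndexRequest request = new IndexRequest("people", "person")
                .source(doc.getJson(), XContentType.JSON);
        client.indexAsync(request, new ActionListener<IndexResponse>() {
            @Override
            public void onResponse(IndexResponse response) {
                sink.success(response); // complete the Mono with the result
            }

            @Override
            public void onFailure(Exception e) {
                sink.error(e); // propagate the failure downstream
            }
        });
    });
}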


Categories: Technical

Mobile Zone 2017: End-of-Year Special

Mon, 01/15/2018 - 2:01am

When you look back, you can see that we really had a cool year in the mobile development world. Kotlin became a hot topic, we were introduced to the new set of iPhones and iOS11, and our DZone contributors taught us so much about polishing our skills. We wouldn't be here without you, readers - let us take you on a little journey back through the highlights of 2017!

The Best of Mobile Development of DZone
  1. Full Stack Java, by Shai Almog. This year's most-read article in the mobile zone! Learn about getting Java right on Android, which is heavily based on XML.
  2. The Wireframe Tools You Should Know in 2017, by Olaotan Richard. Wireframing tools can be a huge timesaver. This overview of wireframing covers tools like Sketch, Framer, and Simulify, and discusses their benefits.
  3. This Week in Mobile: The Best of Modern Programming Languages, by James Sugrue. Learn a little about everything, from hacking your way to a UX job to the ten most interesting features in modern programming languages.
The Best of Android
  1. Java vs. Kotlin: First Impressions Using Kotlin for a Commercial Android Project, by Pedro Lima. This year, we saw Kotlin starting to be used instead of Java in Android development. This tutorial shows you how to get started, for simpler and cleaner code with less bloat.
  2. Android Clean Code: Part 1, by Mohanraj Karatadipalayam. This article kicks off a great series on examining your app's architecture and optimizing it to support unit testing. Be sure to read all five parts!
  3. 2017 in Mobile: DZone's Top 10 Android Libraries, by James Sugrue. For all you Android mobile developers, here are the best of the new libraries that were released for Android developers this year, brought to you by our very own Zone Leader, James.
The Best of iOS
  1. This Week in Mobile: WWDC 2017 and Refactoring Singletons, by James Sugrue. This week we got some news about WWDC, plus more detail on Swift 4. We also cover reducing your singleton usage in Swift and how to get more from Butter Knife.
  2. SOLID Principles Applied to Swift, by Marco Santarossa. Learn to apply the five SOLID principles to the Swift programming language for clean code and a reusable, maintainable component for mobile development.
  3. Object Oriented Programming in Swift, by Andrei Puni. In this article, you'll find a helpful tutorial for getting started in Swift, and using Swift to create apps and development projects.

You can get in on this action by contributing your own knowledge to DZone! Check out our new Bounty Board, where you can claim writing prompts to win prizes! 

Categories: Technical

DevOps Zone 2017: End-of-Year Special

Mon, 01/15/2018 - 2:01am

DevOps went through a ton of changes and growth over the last year. In this review, we hope to remind you of the most interesting events and refresh your memory on some of the most-read and most interesting articles from our contributors. Let's take a look back on the best of this important zone in 2017!

Smash Hits From the DevOps Zone
  1. Most Useful Linux Command Line Tricks, by Seco Max. Here it is – your most popular article in the DevOps Zone this year! Brush up on the command line skills you've forgotten, and learn some you might not know.

Categories: Technical

What's Preventing Big Data Success?

Mon, 01/15/2018 - 2:01am

To gather insights on the state of big data in 2018, we talked to 22 executives from 21 companies who are helping clients manage and optimize their data to drive business value. We asked them, "What are the most common issues you see preventing companies from realizing the benefits of big data?" Here's what they told us.

Legacy Technology
  • Depending on traditional systems. The people aspect is real, as expertise is needed to leverage big data systems. How to enable existing employees to use the data. Find a mix of people who can solve problems together. Have a desired end state but start small. Get wins. Stay focused. Have thoughtful, methodical implementation. 
  • Inability to deal with the shifting sands and technical debt of legacy systems and new software.
  • Willingness to embrace the cloud. Understand that there are multiple ways to approach it. It is not feasible to keep supporting legacy enterprise systems. They are not able to scale with the influx of data.
  • Setting up the right backbone infrastructure (i.e., storage, transport, compute, failover). Getting data delivered from servers to analyze. How to deal with datasets. Scale, complexity, modeling.
  • As organizations try to build big data projects, they are often unable to execute them successfully because costs are constrained, they lack the right talent, they fail to adopt agile procedures, and they want to reuse the existing infrastructure. As a result, the business initiatives that depend on the big data foundation are often implemented in regional or line-of-business silos, ultimately failing to deliver a return on investment (ROI) or taking much longer to achieve results. Initiatives are sometimes limited by ideas and resources or lengthy delays getting from new ideas to execution. Organizations also often fail to analyze big data due to its complexities, which is, in some cases, linked to the lack of data analysts and other IT professionals to help interpret data.
Lack of Knowledge
  • They don’t understand the cloud. They will do a “lift and shift,” adopting Infrastructure as a Service without gaining any efficiency because they do not understand the benefits. They’ll kill their IT department and end up outsourcing management of the cloud to a third-party provider, still not understanding the potential efficiency gains. Be more like Salesforce, which uses the cloud for features, scalability, performance, and storage savings. An elastic cloud will scale up and down. You must use an SQL server network and other components to scale instantly. Public cloud providers are now providing cognitive and AI/ML services.
  • While everyone is excited about big data, there are still some common issues that prevent some companies from realizing its benefits (although these are getting less and less pressing):
    • A zoo of technologies that make it hard to choose which one to bet on.
    • Lack of technical talent.
    • Organizational roadblocks on adopting common data formats. Our recommendation for companies that are early with big data adoption is to pay attention to the new wave of technologies, especially data streaming technologies such as Apache Flink to avoid being left behind as a result of using already-outdated big data technologies that weren’t built for real-time applications. 
  • They like the promise of big data but do not understand specific use cases. There’s a lack of buy-in by the different lines of business or specific business drivers. Lack of understanding of the best technology for the job, be it a data lake, platform, cloud, or software. It’s a complex decision and one that changes daily with all of the new solutions being introduced. It’s not a good idea to rebuild a data warehouse in a Hadoop data lake. Skillsets are less of an issue because of public cloud toolsets, but you still need to understand the use case and the best tools to accomplish your goals.
  • The customer might understand the potential benefit of big data based on what they see their competitors doing but they don’t know where to start. If they stick with the same tools, the same data sources, and the same knowledge, they’re not going to get anywhere. Tap into new talent and tools to solve the problem. I see a lot of cases where the project doesn’t go well and the company walks away from their big data initiative. Need someone on board who knows how to approach big data projects. The first and foremost challenge is fear of the unknown (usually expressed as fear of change), but there are a number of other challenges, including those that I mentioned before, such as ensuring that data analytics meets ethical requirements, regulatory and legal frameworks, and the ever-present challenge of acquiring, retaining, and growing the proper talent in the data science arena.
Business Problem Definition
  • Start with the application and the use case and work from there. You cannot treat data as an afterthought. Key to success is laying the groundwork for many applications. Pay attention to the underlying data store and data fabric. Look at the volume, variety, and velocity of a few solutions to solve for reality: mission-critical, multi-location. 
  • The proliferation of technologies and solutions in the space. Start with Hadoop and realize you need different storage and streaming which leads to Spark. The time spent configuring and managing open-source components in one place can hurt the ROI of your project. We recommend understanding what the best solution for the problem you want to solve is. Look for out-of-the-box solutions to reduce configuration and management time. 
  • Not understanding that big data analytics is a set of tools and technologies that must be selected and applied for measurable outcomes. For measurable outcomes, companies must apply enough rigor to the documentation and analysis of what they're trying to achieve. Companies must then base the selection of the tools and technologies on their capabilities to meet or exceed the desired outcomes. I’ve seen too much "download it, install it, use it," or "try without a defined purpose in mind." In technology, we don’t generally apply a cost to materials when tackling projects with the assumption computing power is available — unlike building a house, for example. However, we burn time on endeavors that are too often unfruitful or ill-fated due to little upfront planning.
Data Quality and Management
  • Ability to get their head around the data. Move data from storage to compute and back as needed.
  • Lack of focus on metadata — not looking at the problem holistically.
  • The systems being used to record data in the first place. No easy way to get data out for comparisons. Data silos by schema and implementation. Inconsistencies in systems and schemas. We normalize data across all systems and schemas.
  • One of the biggest challenges is their ability to use all the data that they have without a lot of manual and time-consuming processes to copy the data to where the analysis is happening. Moving data is very expensive and time-consuming.
  • Unorganized or unstructured data collection and processing. For NLG, in particular, the narrative output is often limited by the cleanliness of the data input.
  • Inability to scale up concurrently in Hadoop. Query engine with single threading. Security’s ability to conform to GDPR. Process technology in place to delete records. Leave data in place — local administrators can know local laws. Prevent queries that may break the law. 
  • Slow, manual, one-off efforts that are discarded. Too much time spent finding data. No common authoritative set of data assets for everyone to use. Preparing and cleaning data takes weeks, leaving insufficient time for analytics. Data lakes become data swamps from data that is inaccurate, incomplete, and without context.
Other
  • Complexity in the technology stack. Retailers want real-time information from shopping carts and 12 months of purchasing history. Stitch three or four systems together. More moving parts result in more opportunities for breakage and latency. Help simplify the data pipeline for greater availability. Data architect the enterprise so that it’s able and ready to scale.

Here’s who we spoke to:

Categories: Technical

Database Sharding Explained in Plain English

Mon, 01/15/2018 - 2:01am

Sharding is one of those database topics that most developers have a distant understanding of, but the details aren't always perfectly clear unless you've implemented sharding yourself. In building the Citus database (our extension to Postgres that shards the underlying database), we've followed a lot of the same principles you'd follow if you were manually sharding Postgres yourself. The main difference, of course, is that with Citus, we've done the heavy lifting to shard Postgres and make it easy to adopt, whereas if you were to shard at the application layer, there's a good bit of work needed to re-architect your application.

I've found myself explaining how sharding works to many people over the past year and realized it would be useful (and maybe even interesting) to break it down in plain English.
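
To make "sharding at the application layer" concrete, here is a minimal sketch (the class, fields, and URLs are ours, not Citus code): hash the distribution key and route each query to the shard that owns it.

import java.util.Arrays;
import java.util.List;

public class ShardRouter {
    // One connection string per shard; these URLs are hypothetical.
    private final List<String> shards = Arrays.asList(
            "jdbc:postgresql://shard-0:5432/app",
            "jdbc:postgresql://shard-1:5432/app",
            "jdbc:postgresql://shard-2:5432/app");

    // Route by hashing the distribution key (e.g. customer_id).
    // Math.floorMod keeps the index non-negative even for negative hashes.
    public String shardFor(long customerId) {
        return shards.get(Math.floorMod(Long.hashCode(customerId), shards.size()));
    }
}

Real systems, Citus included, typically map hash values to ranges owned by shards rather than taking a bare modulo, so shards can be split or moved without rehashing every row.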

Categories: Technical

API Life Cycle Basics: Clients

Mon, 01/15/2018 - 2:01am

I broke this area of my research into a separate stop a couple years back, as I saw several new types of service providers emerging to provide a new type of web-based API client. These new tools allowed you to consume, collaborate, and put APIs to use without writing any code. I knew that this shift was going to be significant, even though it hasn’t played out as I expected, with most of the providers disappearing, or being acquired, and leaving just a handful of solutions that we see today.

These new web API clients allow for authentication, and the ability to quickly copy and paste API URLs, or the importing of API definitions to begin making requests and seeing responses for targeted APIs. These clients were born out of earlier API explorers and interactive API documentation, but have matured into standalone services that are doing interesting things with how we consume APIs. Here are the three web API clients I recommend you consider as part of your API lifecycle.

Categories: Technical

Master Data Management: Answer to GDPR?

Mon, 01/15/2018 - 2:01am

A well-built Master Data Management (MDM) solution can solve many of the headaches a common enterprise faces. Specifically, such a solution gives the enterprise the visibility to their data and the sources of where it is stored and used, while keeping it current and relevant.

In this article, I want to look at this still poorly understood field in data management, and discuss how we can consider it as a fundamental step towards compliance with the EU’s upcoming General Data Protection Regulation (GDPR).

Here at Grakn Labs, we have worked with a number of companies in different industries, including financial services (banks, hedge funds, etc), to deliver these types of solutions, where we are asked, amongst other things, to help enterprises represent their master data into a Grakn knowledge base. In this article, I want to share some of the lessons we have learned along the way.

Categories: Technical

The Rise of the Data Fabric

Mon, 01/15/2018 - 2:01am

As enterprises and suppliers adopt cloud computing, edge computing has also become increasingly important. Smartphones, tablets, laptop computers, traditional PCs and, perhaps most importantly, IoT devices are now being pressed into service as part of major applications. This has created an imperative to deploy technologies designed to bring these devices “into the fold.”

This means technology designed to serve in the following ways must be deployed as well – technology that makes it easily possible to:

Categories: Technical

How to Build a Contextual Conversational Application

Mon, 01/15/2018 - 2:01am

A conversation is the exchange of ideas between two or more people. In other words, it's a series of questions and answers. The conversational app that you're building for your interaction also has two sides: the end-user can ask, "What happened today in the past?" and your bot will respond with an interesting fact from the past and might also add, "Do you want me to send you a link to this article?" The end-user can then approve the offer or respond with something like, "No, please deliver it to my office." As we can see, a true contextual conversation isn't a simple Q&A. Each end user can use a different order or flow of information — and your app needs to handle all the different flows.

In a previous article, you built a "what happened today" conversational application that used Zapier to integrate with more than 750 web apps. In this tutorial, we will add contextual conversation to your application, Alexa skill, Google Home action, or chatbot.
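
The crux of handling those flows is remembering, per user, what the bot just offered, so a follow-up like "No, please deliver it to my office" can be interpreted in context. A minimal sketch (the session store, state names, and replies are ours, not from the tutorial):

import java.util.HashMap;
import java.util.Map;

public class ConversationContext {
    private enum Pending { NONE, OFFERED_LINK }

    // Per-user conversational state; a real app would persist this per session.
    private final Map<String, Pending> sessions = new HashMap<>();

    public String handle(String userId, String utterance) {
        if (sessions.getOrDefault(userId, Pending.NONE) == Pending.OFFERED_LINK) {
            sessions.put(userId, Pending.NONE);
            // The follow-up is interpreted in the context of the earlier offer.
            return utterance.toLowerCase().contains("office")
                    ? "OK, I'll deliver the article to your office."
                    : "Done – I've sent you the link.";
        }
        sessions.put(userId, Pending.OFFERED_LINK);
        return "Here's today's fact... Do you want me to send you a link to this article?";
    }
}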

Categories: Technical

Translating Phoenix Applications With Gettext

Mon, 01/15/2018 - 2:01am

Phoenix is a fast and reliable MVC framework written in the language Elixir (which, in turn, relies on Erlang). It has many features that should be familiar to developers who come from the Rails or Django world, but, at the same time, it may seem a bit complex at first due to Elixir's functional nature.

In this article, you will learn about Phoenix i18n. I'll walk you through how to add support for i18n in Phoenix applications with the help of Gettext (which is a default dependency). You will learn what Gettext is, what PO and POT files are, how to generate them and easily extract translations from your views. I will also talk about supporting multiple locales, pluralization rules, and domains. If you would like to run the code samples presented in this article locally, you'll need to install OTP (at least 18), Elixir (at least 1.4) and, of course, the Phoenix framework itself (version 1.3 will be used in this tutorial).

Categories: Technical

The Future of Containers and Microservices

Mon, 01/15/2018 - 2:01am

Microservices are emerging as the preferred way to create enterprise applications, bringing a wide range of advantages such as isolated risk, faster innovation, flexibility, and agility.

As more enterprises are laying the foundations for more agile and services-based architectures, we’ve asked Petrica Martinescu, Lead Architect at Tremend, for his perspective on where the industry goes from here:

Categories: Technical

Converting Collections to Maps With JDK 8

Mon, 01/15/2018 - 1:01am

I have run into situations several times where it is desirable to store multiple objects in a Map instead of a Set or List because there are advantages to using a Map of uniquely identifying information to the objects. Java 8 has made this translation easier than ever with streams and the Collectors.toMap(...) methods.

One situation in which it has been useful to use a Map instead of a Set is when working with objects that lack or have sketchy equals(Object) or hashCode() implementations, but do have a field that uniquely identifies the objects. In those cases, if I cannot add or fix the objects' underlying implementations, I can gain better uniqueness guarantees by using a Map of the uniquely identifying field of the class (key) to the class's instantiated object (value). Perhaps a more frequent scenario when I prefer Map to List or Set is when I need to lookup items in the collection by a specific uniquely identifying field. A map lookup on a uniquely identifying key is speedy and often much faster than depending on iteration and comparing each object with an invocation to the equals(Object) method.
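
A quick sketch of that idiom (Person and getSsn() are hypothetical stand-ins for a class with a uniquely identifying field):

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ToMapDemo {
    // Hypothetical domain class whose ssn field uniquely identifies each instance.
    static final class Person {
        final String ssn;
        final String name;
        Person(String ssn, String name) { this.ssn = ssn; this.name = name; }
        String getSsn() { return ssn; }
    }

    public static void main(String[] args) {
        List<Person> people = Arrays.asList(
                new Person("111-11-1111", "Ada"),
                new Person("222-22-2222", "Linus"));

        // Key: the uniquely identifying field; value: the object itself.
        Map<String, Person> bySsn = people.stream()
                .collect(Collectors.toMap(Person::getSsn, Function.identity()));

        // O(1) lookup by key instead of iterating and calling equals(Object).
        System.out.println(bySsn.get("111-11-1111").name); // prints Ada
    }
}

One caveat: the two-argument Collectors.toMap throws an IllegalStateException on duplicate keys – arguably a feature here, since it surfaces violations of the uniqueness assumption early.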

Categories: Technical