      Production best practices: performance and reliability

      2024-05-14 08:53:33

      Overview

      This article discusses performance and reliability best practices for Express applications deployed to production.

      This topic clearly falls into the “devops” world, spanning both traditional development and operations. Accordingly, the information is divided into two parts:

      • Things to do in your code (the dev part):
        • Use gzip compression
        • Don’t use synchronous functions
        • Do logging correctly
        • Handle exceptions properly
      • Things to do in your environment / setup (the ops part):
        • Set NODE_ENV to “production”
        • Ensure your app automatically restarts
        • Run your app in a cluster
        • Cache request results
        • Use a load balancer
        • Use a reverse proxy

      Things to do in your code

      Here are some things you can do in your code to improve your application’s performance:

      • Use gzip compression
      • Don’t use synchronous functions
      • Do logging correctly
      • Handle exceptions properly

      Use gzip compression

      Gzip compression can greatly decrease the size of the response body and hence increase the speed of a web app. Use the compression middleware for gzip compression in your Express app. For example:

      var compression = require('compression')
      var express = require('express')
      var app = express()
      app.use(compression())
      

      For a high-traffic website in production, the best way to put compression in place is to implement it at a reverse proxy level (see Use a reverse proxy). In that case, you do not need to use compression middleware. For details on enabling gzip compression in Nginx, see Module ngx_http_gzip_module in the Nginx documentation.
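As an illustration, enabling gzip at the Nginx layer takes only a few directives; the values below are a minimal sketch, not tuned recommendations:

```nginx
# In the http or server block of your Nginx configuration
gzip on;
gzip_comp_level 5;              # trade CPU for compression ratio (1-9)
gzip_min_length 1024;           # skip tiny responses where gzip does not pay off
gzip_types text/plain text/css application/json application/javascript;
```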

      Don’t use synchronous functions

      Synchronous functions and methods tie up the executing process until they return. A single call to a synchronous function might return in a few microseconds or milliseconds; however, in high-traffic websites these calls add up and reduce the performance of the app. Avoid their use in production.

      Although Node and many modules provide synchronous and asynchronous versions of their functions, always use the asynchronous version in production. The only time when a synchronous function can be justified is upon initial startup.

      If you are using Node.js 4.0+ or io.js 2.1.0+, you can use the --trace-sync-io command-line flag to print a warning and a stack trace whenever your application uses a synchronous API. Of course, you wouldn’t want to use this in production, but rather to ensure that your code is ready for production. See the node command-line options documentation for more information.

      Do logging correctly

      In general, there are two reasons for logging from your app: For debugging and for logging app activity (essentially, everything else). Using console.log() or console.error() to print log messages to the terminal is common practice in development. But these functions are synchronous when the destination is a terminal or a file, so they are not suitable for production, unless you pipe the output to another program.

      For debugging

      If you’re logging for purposes of debugging, then instead of using console.log(), use a special debugging module like debug. This module enables you to use the DEBUG environment variable to control what debug messages are sent to console.error(), if any. To keep your app purely asynchronous, you’d still want to pipe console.error() to another program. But then, you’re not really going to debug in production, are you?

      For app activity

      If you’re logging app activity (for example, tracking traffic or API calls), instead of using console.log(), use a logging library like Winston or Bunyan. For a detailed comparison of these two libraries, see the StrongLoop blog post Comparing Winston and Bunyan Node.js Logging.
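Libraries like Winston and Bunyan emit structured, line-delimited JSON with levels, transports, and serializers built in. As a minimal illustration of the idea only (logEvent is a hypothetical stand-in, not either library's API):

```javascript
// One JSON object per line: easy for log shippers and downstream tools to parse
function logEvent (level, msg, fields) {
  const entry = Object.assign({
    time: new Date().toISOString(),
    level: level,
    msg: msg
  }, fields)
  process.stdout.write(JSON.stringify(entry) + '\n')
}

logEvent('info', 'request served', { path: '/search', status: 200 })
```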

      Handle exceptions properly

      Node apps crash when they encounter an uncaught exception. Not handling exceptions and taking appropriate actions will make your Express app crash and go offline. If you follow the advice in Ensure your app automatically restarts below, then your app will recover from a crash. Fortunately, Express apps typically have a short startup time. Nevertheless, you want to avoid crashing in the first place, and to do that, you need to handle exceptions properly.

      To ensure you handle all exceptions, use the following techniques:

      • Use try-catch
      • Use promises

      Before diving into these topics, you should have a basic understanding of Node/Express error handling: using error-first callbacks, and propagating errors in middleware. Node uses an “error-first callback” convention for returning errors from asynchronous functions, where the first parameter to the callback function is the error object, followed by result data in succeeding parameters. To indicate no error, pass null as the first parameter. The callback function must correspondingly follow the error-first callback convention to meaningfully handle the error. And in Express, the best practice is to use the next() function to propagate errors through the middleware chain.
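The two conventions can be sketched together; getUser and userHandler below are hypothetical examples, not library APIs:

```javascript
// Error-first callback convention: the error (or null) comes first
function getUser (id, callback) {
  if (typeof id !== 'number' || Number.isNaN(id)) {
    return callback(new Error('id must be a number'))
  }
  callback(null, { id: id, name: 'user' + id }) // null means "no error"
}

// An Express route handler forwards the error to next() rather than throwing,
// so it propagates to the error-handling middleware
function userHandler (req, res, next) {
  getUser(Number(req.params.id), function (err, user) {
    if (err) return next(err)
    res.json(user)
  })
}
```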

      For more on the fundamentals of error handling, see:

      • Error Handling in Node.js
      • Building Robust Node Applications: Error Handling (StrongLoop blog)

      What not to do

      One thing you should not do is to listen for the uncaughtException event, emitted when an exception bubbles all the way back to the event loop. Adding an event listener for uncaughtException will change the default behavior of the process that is encountering an exception; the process will continue to run despite the exception. This might sound like a good way of preventing your app from crashing, but continuing to run the app after an uncaught exception is a dangerous practice and is not recommended, because the state of the process becomes unreliable and unpredictable.

      Additionally, using uncaughtException is officially recognized as crude. So listening for uncaughtException is just a bad idea. This is why we recommend things like multiple processes and supervisors: crashing and restarting is often the most reliable way to recover from an error.

      We also don’t recommend using domains. It generally doesn’t solve the problem and is a deprecated module.

      Use try-catch

      Try-catch is a JavaScript language construct that you can use to catch exceptions in synchronous code. Use try-catch, for example, to handle JSON parsing errors as shown below.

      Use a tool such as JSHint or JSLint to help you find implicit exceptions like reference errors on undefined variables.

      Here is an example of using try-catch to handle a potential process-crashing exception. This middleware function accepts a query-string field named “params” that contains a JSON string.

      app.get('/search', function (req, res) {
        // Simulating async operation
        setImmediate(function () {
          var jsonStr = req.query.params
          try {
            var jsonObj = JSON.parse(jsonStr)
            res.send('Success')
          } catch (e) {
            res.status(400).send('Invalid JSON string')
          }
        })
      })
      

      However, try-catch works only for synchronous code. Because the Node platform is primarily asynchronous (particularly in a production environment), try-catch won’t catch a lot of exceptions.

      Use promises

      Promises will handle any exceptions (both explicit and implicit) in asynchronous code blocks that use then(). Just add .catch(next) to the end of promise chains. For example:

      app.get('/', function (req, res, next) {
        // do some sync stuff
        queryDb()
          .then(function (data) {
            // handle data
            return makeCsv(data)
          })
          .then(function (csv) {
            // handle csv
          })
          .catch(next)
      })
      
      app.use(function (err, req, res, next) {
        // handle error
      })
      

      Now all errors, asynchronous and synchronous, get propagated to the error middleware.

      However, there are two caveats:

      1. All your asynchronous code must return promises (except emitters). If a particular library does not return promises, convert the base object by using a helper function like Bluebird.promisifyAll().
      2. Event emitters (like streams) can still cause uncaught exceptions. So make sure you are handling the error event properly; for example:
      const wrap = fn => (...args) => fn(...args).catch(args[2])
      
      app.get('/', wrap(async (req, res, next) => {
        const company = await getCompanyById(req.query.id)
        const stream = getLogoStreamById(company.id)
        stream.on('error', next).pipe(res)
      }))
      

      The wrap() function is a wrapper that catches rejected promises and calls next() with the error as the first argument. For details, see Asynchronous Error Handling in Express with Promises, Generators and ES7.

      For more information about error-handling by using promises, see Promises in Node.js with Q – An Alternative to Callbacks.

      Things to do in your environment / setup

      Here are some things you can do in your system environment to improve your app’s performance:

      • Set NODE_ENV to “production”
      • Ensure your app automatically restarts
      • Run your app in a cluster
      • Cache request results
      • Use a load balancer
      • Use a reverse proxy

      Set NODE_ENV to “production”

      The NODE_ENV environment variable specifies the environment in which an application is running (usually, development or production). One of the simplest things you can do to improve performance is to set NODE_ENV to “production.”

      Setting NODE_ENV to “production” makes Express:

      • Cache view templates.
      • Cache CSS files generated from CSS extensions.
      • Generate less verbose error messages.

      Tests indicate that just doing this can improve app performance by a factor of three!

      If you need to write environment-specific code, you can check the value of NODE_ENV with process.env.NODE_ENV. Be aware that checking the value of any environment variable incurs a performance penalty, and so should be done sparingly.
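In line with that advice, one common pattern is to read the variable once at startup and reuse the result:

```javascript
// Check NODE_ENV once and cache the answer, rather than on every request
const isProduction = process.env.NODE_ENV === 'production'

if (!isProduction) {
  console.log('running in development mode')
}
```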

      In development, you typically set environment variables in your interactive shell, for example by using export or your .bash_profile file. But in general you shouldn’t do that on a production server; instead, use your OS’s init system (systemd or Upstart). The next section provides more details about using your init system in general, but setting NODE_ENV is so important for performance (and easy to do), that it’s highlighted here.

      With Upstart, use the env keyword in your job file. For example:

      # /etc/init/env.conf
      env NODE_ENV=production
      

      For more information, see the Upstart Intro, Cookbook and Best Practices.

      With systemd, use the Environment directive in your unit file. For example:

      # /etc/systemd/system/myservice.service
      Environment=NODE_ENV=production
      

      For more information, see Using Environment Variables In systemd Units.

      Ensure your app automatically restarts

      In production, you don’t want your application to be offline, ever. This means you need to make sure it restarts both if the app crashes and if the server itself crashes. Although you hope that neither of those events occurs, realistically you must account for both eventualities by:

      • Using a process manager to restart the app (and Node) when it crashes.
      • Using the init system provided by your OS to restart the process manager when the OS crashes. It’s also possible to use the init system without a process manager.

      Node applications crash if they encounter an uncaught exception. The foremost thing you need to do is to ensure your app is well-tested and handles all exceptions (see handle exceptions properly for details). But as a fail-safe, put a mechanism in place to ensure that if and when your app crashes, it will automatically restart.

      Use a process manager

      In development, you started your app simply from the command line with node server.js or something similar. But doing this in production is a recipe for disaster. If the app crashes, it will be offline until you restart it. To ensure your app restarts if it crashes, use a process manager. A process manager is a “container” for applications that facilitates deployment, provides high availability, and enables you to manage the application at runtime.

      In addition to restarting your app when it crashes, a process manager can enable you to:

      • Gain insights into runtime performance and resource consumption.
      • Modify settings dynamically to improve performance.
      • Control clustering (StrongLoop PM and pm2).

      The most popular process managers for Node are as follows:

      • StrongLoop Process Manager
      • PM2
      • Forever

      For a feature-by-feature comparison of the three process managers, see the comparison table published on the StrongLoop site.

      Using any of these process managers will suffice to keep your application up, even if it does crash from time to time.

      However, StrongLoop PM has lots of features that specifically target production deployment. You can use it and the related StrongLoop tools to:

      • Build and package your app locally, then deploy it securely to your production system.
      • Automatically restart your app if it crashes for any reason.
      • Manage your clusters remotely.
      • View CPU profiles and heap snapshots to optimize performance and diagnose memory leaks.
      • View performance metrics for your application.
      • Easily scale to multiple hosts with integrated control for Nginx load balancer.

      As explained below, when you install StrongLoop PM as an operating system service using your init system, it will automatically restart when the system restarts. Thus, it will keep your application processes and clusters alive forever.

      Use an init system

      The next layer of reliability is to ensure that your app restarts when the server restarts. Systems can still go down for a variety of reasons. To ensure that your app restarts if the server crashes, use the init system built into your OS. The two main init systems in use today are systemd and Upstart.

      There are two ways to use init systems with your Express app:

      • Run your app in a process manager, and install the process manager as a service with the init system. The process manager will restart your app when the app crashes, and the init system will restart the process manager when the OS restarts. This is the recommended approach.
      • Run your app (and Node) directly with the init system. This is somewhat simpler, but you don’t get the additional advantages of using a process manager.

      Systemd

      Systemd is a Linux system and service manager. Most major Linux distributions have adopted systemd as their default init system.

      A systemd service configuration file is called a unit file, with a filename ending in .service. Here’s an example unit file to manage a Node app directly. Replace the values enclosed in <angle brackets> for your system and app:

      [Unit]
      Description=<Awesome Express App>
      
      [Service]
      Type=simple
      ExecStart=/usr/local/bin/node </projects/myapp/index.js>
      WorkingDirectory=</projects/myapp>
      
      User=nobody
      Group=nogroup
      
      # Environment variables:
      Environment=NODE_ENV=production
      
      # Allow many incoming connections
      LimitNOFILE=infinity
      
      # Allow core dumps for debugging
      LimitCORE=infinity
      
      StandardInput=null
      StandardOutput=syslog
      StandardError=syslog
      Restart=always
      
      [Install]
      WantedBy=multi-user.target
      

      For more information on systemd, see the systemd reference (man page).

      StrongLoop PM as a systemd service

      You can easily install StrongLoop Process Manager as a systemd service. After you do, when the server restarts, it will automatically restart StrongLoop PM, which will then restart all the apps it is managing.

      To install StrongLoop PM as a systemd service:

      $ sudo sl-pm-install --systemd
      

      Then start the service with:

      $ sudo /usr/bin/systemctl start strong-pm
      

      For more information, see Setting up a production host (StrongLoop documentation).

      Upstart

      Upstart is a system tool available on many Linux distributions for starting tasks and services during system startup, stopping them during shutdown, and supervising them. You can configure your Express app or process manager as a service and then Upstart will automatically restart it when it crashes.

      An Upstart service is defined in a job configuration file (also called a “job”) with filename ending in .conf. The following example shows how to create a job called “myapp” for an app named “myapp” with the main file located at /projects/myapp/index.js.

      Create a file named myapp.conf at /etc/init/ with the following content (replace the example values with values for your system and app):

      # When to start the process
      start on runlevel [2345]
      
      # When to stop the process
      stop on runlevel [016]
      
      # Increase file descriptor limit to be able to handle more requests
      limit nofile 50000 50000
      
      # Use production mode
      env NODE_ENV=production
      
      # Run as www-data
      setuid www-data
      setgid www-data
      
      # Run from inside the app dir
      chdir /projects/myapp
      
      # The process to start
      exec /usr/local/bin/node /projects/myapp/index.js
      
      # Restart the process if it is down
      respawn
      
      # Limit restart attempt to 10 times within 10 seconds
      respawn limit 10 10
      

      NOTE: This script requires Upstart 1.4 or newer, supported on Ubuntu 12.04-14.10.

      Since the job is configured to run when the system starts, your app will be started along with the operating system, and automatically restarted if the app crashes or the system goes down.

      Apart from automatically restarting the app, Upstart enables you to use these commands:

      • start myapp – Start the app
      • restart myapp – Restart the app
      • stop myapp – Stop the app.

      For more information on Upstart, see Upstart Intro, Cookbook and Best Practises.

      StrongLoop PM as an Upstart service

      You can easily install StrongLoop Process Manager as an Upstart service. After you do, when the server restarts, it will automatically restart StrongLoop PM, which will then restart all the apps it is managing.

      To install StrongLoop PM as an Upstart 1.4 service:

      $ sudo sl-pm-install
      

      Then run the service with:

      $ sudo /sbin/initctl start strong-pm
      

      NOTE: On systems that don’t support Upstart 1.4, the commands are slightly different. See Setting up a production host (StrongLoop documentation) for more information.

      Run your app in a cluster

      In a multi-core system, you can increase the performance of a Node app by many times by launching a cluster of processes. A cluster runs multiple instances of the app, ideally one instance on each CPU core, thereby distributing the load and tasks among the instances.


      IMPORTANT: Since the app instances run as separate processes, they do not share the same memory space. That is, objects are local to each instance of the app. Therefore, you cannot maintain state in the application code. However, you can use an in-memory datastore like Redis to store session-related data and state. This caveat applies to essentially all forms of horizontal scaling, whether clustering with multiple processes or multiple physical servers.

      In clustered apps, worker processes can crash individually without affecting the rest of the processes. Apart from performance advantages, failure isolation is another reason to run a cluster of app processes. Whenever a worker process crashes, always make sure to log the event and spawn a new process using cluster.fork().

      Using Node’s cluster module

      Clustering is made possible with Node’s cluster module. This enables a master process to spawn worker processes and distribute incoming connections among the workers. However, rather than using this module directly, it’s far better to use one of the many tools out there that does it for you automatically; for example node-pm or cluster-service.

      Using StrongLoop PM

      If you deploy your application to StrongLoop Process Manager (PM), then you can take advantage of clustering without modifying your application code.

      When StrongLoop Process Manager (PM) runs an application, it automatically runs it in a cluster with a number of workers equal to the number of CPU cores on the system. You can manually change the number of worker processes in the cluster using the slc command line tool without stopping the app.

      For example, assuming you’ve deployed your app and StrongLoop PM is listening on port 8701 (the default), then to set the cluster size to eight using slc:

      $ slc ctl -C http://:8701 set-size my-app 8
      

      For more information on clustering with StrongLoop PM, see Clustering in StrongLoop documentation.

      Using PM2

      If you deploy your application with PM2, then you can take advantage of clustering without modifying your application code. You should ensure your application is stateless first, meaning no local data is stored in the process (such as sessions, websocket connections and the like).

      When running an application with PM2, you can enable cluster mode to run it in a cluster with a number of instances of your choosing, such as matching the number of available CPUs on the machine. You can manually change the number of processes in the cluster using the pm2 command line tool without stopping the app.

      To enable cluster mode, start your application like so:

      # Start 4 worker processes
      $ pm2 start app.js -i 4
      # Auto-detect number of available CPUs and start that many worker processes
      $ pm2 start app.js -i max
      

      This can also be configured within a PM2 process file (ecosystem.config.js or similar) by setting exec_mode to cluster and instances to the number of workers to start.

      Once running, a given application with the name app can be scaled like so:

      # Add 3 more workers
      $ pm2 scale app +3
      # Scale to a specific number of workers
      $ pm2 scale app 2
      

      For more information on clustering with PM2, see Cluster Mode in the PM2 documentation.

      Cache request results

      Another strategy to improve the performance in production is to cache the result of requests, so that your app does not repeat the operation to serve the same request repeatedly.

      Use a caching server like Varnish or Nginx (see also Nginx Caching) to greatly improve the speed and performance of your app.
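As an illustration, a minimal Nginx caching setup in front of a Node app might look like the following sketch (the cache path, zone name, and upstream port are assumptions):

```nginx
# Cache responses from the Node app upstream
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m max_size=1g inactive=60m;

server {
  listen 80;

  location / {
    proxy_cache app_cache;
    proxy_cache_valid 200 10m;         # cache successful responses for 10 minutes
    proxy_pass http://127.0.0.1:3000;  # assumed app port
  }
}
```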

      Use a load balancer

      No matter how optimized an app is, a single instance can handle only a limited amount of load and traffic. One way to scale an app is to run multiple instances of it and distribute the traffic via a load balancer. Setting up a load balancer can improve your app’s performance and speed, and enable it to scale more than is possible with a single instance.

      A load balancer is usually a reverse proxy that orchestrates traffic to and from multiple application instances and servers. You can easily set up a load balancer for your app by using Nginx or HAProxy.

      With load balancing, you might have to ensure that requests that are associated with a particular session ID connect to the process that originated them. This is known as session affinity, or sticky sessions, and may be addressed by the suggestion above to use a data store such as Redis for session data (depending on your application). For a discussion, see Using multiple nodes.

      Use a reverse proxy

      A reverse proxy sits in front of a web app and performs supporting operations on the requests, apart from directing requests to the app. It can handle error pages, compression, caching, serving files, and load balancing among other things.

      Handing over tasks that do not require knowledge of application state to a reverse proxy frees up Express to perform specialized application tasks. For this reason, it is recommended to run Express behind a reverse proxy like Nginx or HAProxy in production.

      Copyright notice: this article is a third-party contribution or authorized repost. Original: https://blog.51cto.com/rongfengliang/3121427, author: rongfengliang; copyright belongs to the original author. This site reposts the work to convey more information, does not own the copyright, and assumes no corresponding legal liability. For content or copyright issues, contact ctyunbbs@chinatelecom.cn.
