
NIO Series 4: Reading and Writing Data in a TCP Service

June 15, 2019

Note: this article assumes some familiarity with the Java NIO API and with the asynchronous event model (the Reactor pattern). It walks through the process and design details of implementing a TCP service with raw Java NIO.

The previous article described how, once a client finishes establishing a connection with the server, its SocketChannel is wrapped in a session object representing the connection and handed to a processor.

The processor maintains three important queues, holding newly created sessions, sessions with data to write, and sessions about to be closed:

    /** A Session queue containing the newly created sessions */
    private final Queue<AbstractSession> newSessions = new ConcurrentLinkedQueue<AbstractSession>();
	
    /** A queue used to store the sessions to be flushed */
    private final Queue<AbstractSession> flushingSessions = new ConcurrentLinkedQueue<AbstractSession>();
    
    /** A queue used to store the sessions to be closed */
    private final Queue<AbstractSession> closingSessions = new ConcurrentLinkedQueue<AbstractSession>();

In the processor's reactor loop thread, each round of the loop performs the following steps:

1. selector.select(). To handle connection timeouts, the select call is given a timeout (typically 1 second) so that it never blocks forever. Even when no events occur, it returns once per second, letting the loop check for timed-out connections:

int selected = selector.select(SELECT_TIMEOUT);
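As a minimal, self-contained illustration (not the processor code itself; the class and method names below are made up), a select with a timeout returns even when no channel has any events:

```java
import java.io.IOException;
import java.nio.channels.Selector;

public class SelectTimeoutDemo {
    /** Typical timeout used by the reactor loop, in milliseconds. */
    static final long SELECT_TIMEOUT = 1000L;

    /** Returns the number of ready keys; 0 means the call woke up on timeout. */
    static int selectOnce(Selector selector, long timeoutMillis) throws IOException {
        return selector.select(timeoutMillis);
    }

    public static void main(String[] args) throws IOException {
        try (Selector selector = Selector.open()) {
            // No channels are registered, so this returns 0 after roughly SELECT_TIMEOUT ms.
            int selected = selectOnce(selector, SELECT_TIMEOUT);
            System.out.println("ready keys: " + selected);
        }
    }
}
```

Because the call always comes back within the timeout, the loop gets a regular chance to scan sessions for idle timeouts even on a completely quiet server.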

2. After select returns, first check whether the newSessions queue contains newly added sessions, and register event listening (the read event) for each of them. Only after its events are registered do we consider a session open and dispatch the opened event. (A session's state is one of: created, opened, closing, closed.)

    for (AbstractSession session = newSessions.poll(); session != null; session = newSessions.poll()) {
        SelectableChannel sc = session.getChannel();
        SelectionKey key = sc.register(selector, SelectionKey.OP_READ, session);
        session.setSelectionKey(key);

        // set session state open, so we can read / write
        session.setOpened();

        // fire session opened event
        eventDispatcher.dispatch(new Event(EventType.SESSION_OPENED, session, null, handler));

        n++;
    }

3. When read/write events occur, handle them; each time a read or write event fires, update the session's last-IO timestamp.

    // set last IO time
    session.setLastIoTime(System.currentTimeMillis());

    // Process reads
    if (session.isOpened() && isReadable(session)) {
        read(session);
    }

    // Process writes
    if (session.isOpened() && isWritable(session)) {
        asyWrite(session);
    }

A small trick when reading is adaptive buffer allocation (a strategy borrowed from mina): after each read, if twice the number of bytes read is still smaller than the buffer size, shrink the buffer to half its size; if the bytes read completely filled the buffer, double it.

    int readBytes = 0;
    int ret;
    while ((ret = ((SocketChannel) session.getChannel()).read(buf)) > 0) {
        readBytes += ret;
        if (!buf.hasRemaining()) {
            break;
        }
    }

    if (readBytes > 0) {
        if ((readBytes << 1) < session.getReadBufferSize()) {
            shrinkReadBufferSize(session);
        } else if (readBytes == session.getReadBufferSize()) {
            extendReadBufferSize(session);
        }

        fireMessageReceived(session, buf, readBytes);
    }

    // ret < 0 means end-of-stream: the remote peer closed the channel, so close the session.
    if (ret < 0) {
        asyClose(session);
    }
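The shrink/extend policy itself can be sketched as a pure size function. The original shrinkReadBufferSize / extendReadBufferSize implementations are not shown above, so the bounds and names below are assumptions (mina makes the minimum and maximum configurable):

```java
public class AdaptiveReadBuffer {
    // Illustrative bounds, not taken from the original code.
    static final int MIN_READ_BUFFER_SIZE = 64;
    static final int MAX_READ_BUFFER_SIZE = 64 * 1024;

    /**
     * Halve the buffer when the bytes read would still fit in half of it,
     * double it when the read filled it completely; otherwise keep the size.
     */
    static int nextReadBufferSize(int currentSize, int readBytes) {
        if ((readBytes << 1) < currentSize) {
            return Math.max(currentSize >>> 1, MIN_READ_BUFFER_SIZE);
        } else if (readBytes == currentSize) {
            return Math.min(currentSize << 1, MAX_READ_BUFFER_SIZE);
        }
        return currentSize;
    }

    public static void main(String[] args) {
        System.out.println(nextReadBufferSize(4096, 1000)); // → 2048 (shrinks)
        System.out.println(nextReadBufferSize(4096, 4096)); // → 8192 (extends)
        System.out.println(nextReadBufferSize(4096, 3000)); // → 4096 (unchanged)
    }
}
```

The doubling/halving keeps the buffer within a factor of two of the recent read size, so a mostly idle connection does not pin a large buffer while a busy one quickly grows to full throughput.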

Handling a write is actually asynchronous: the session is always enqueued into flushingSessions and flushed later.

    private void asyWrite(AbstractSession session) {
        // Add session to flushing queue, soon after it will be flushed in the same select loop.
        flushingSessions.add(session);
    }

4. If any sessions need data written, flush them.

The write event is normally not registered by default, because it fires whenever the TCP send buffer becomes writable, the remote end disconnects, or an I/O error occurs; keeping it registered easily leads to a server busy loop and 100% CPU. To keep reads and writes fair, the write buffer is sized at 1.5 times the read buffer (another strategy from mina), and before each write the session's interest in the write event is cleared. Limiting how much data is written per round not only preserves read/write fairness; it also prevents a connection with a large backlog from monopolizing the network bandwidth in one pass and delaying the writes of every other connection.

    // First, clear interest in the write event
    setInterestedInWrite(session, false);
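setInterestedInWrite presumably toggles OP_WRITE in the key's interest set without disturbing the other bits; a sketch of that bit manipulation (the helper name here is hypothetical, the original implementation is not shown):

```java
import java.nio.channels.SelectionKey;

public class InterestOps {
    /** Add or remove OP_WRITE in an interest-op set, leaving the other ops untouched. */
    static int withWriteInterest(int ops, boolean interested) {
        return interested ? (ops | SelectionKey.OP_WRITE)
                          : (ops & ~SelectionKey.OP_WRITE);
    }

    public static void main(String[] args) {
        int readOnly = SelectionKey.OP_READ;
        int readWrite = SelectionKey.OP_READ | SelectionKey.OP_WRITE;
        System.out.println(withWriteInterest(readOnly, true) == readWrite);   // → true
        System.out.println(withWriteInterest(readWrite, false) == readOnly);  // → true
    }
}
```

On a real session this would be applied as `key.interestOps(withWriteInterest(key.interestOps(), interested))`, preserving the session's OP_READ registration either way.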

First, write data into the TCP send buffer (on a non-blocking channel, the native NIO write call does not block):

    int quota = maxWrittenBytes - writtenBytes;
    int localWrittenBytes = write(session, buf, quota);
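The `write(session, buf, quota)` helper above is not shown; one plausible way to cap a single write is to temporarily shrink the buffer's limit, as sketched below against a generic WritableByteChannel (class and method names are assumptions):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.WritableByteChannel;

public class QuotaWrite {
    /**
     * Write at most `quota` bytes from buf to the channel, restoring buf's
     * limit afterwards so the unwritten tail stays queued for the next round.
     */
    static int writeWithQuota(WritableByteChannel channel, ByteBuffer buf, int quota)
            throws IOException {
        int oldLimit = buf.limit();
        if (buf.remaining() > quota) {
            buf.limit(buf.position() + quota); // cap what this single call may write
        }
        try {
            return channel.write(buf);
        } finally {
            buf.limit(oldLimit);
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ByteBuffer buf = ByteBuffer.wrap("hello world".getBytes());
        int n = writeWithQuota(Channels.newChannel(out), buf, 5);
        // → 5 bytes written, 6 left, channel received "hello"
        System.out.println(n + " bytes written, " + buf.remaining() + " left, got \"" + out + "\"");
    }
}
```

Since the buffer's position advances only by what was actually written, the remaining bytes are naturally picked up when the session is flushed again.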

Depending on the number of bytes the call reports written, several cases are possible:

- The buffer was fully written in one call: remove it from the write queue and dispatch the message-sent event.

    // The buffer is all flushed, remove it from write queue
    if (!buf.hasRemaining()) {
        if (LOG.isDebugEnabled()) {
            LOG.debug("The buffer is all flushed, remove it from write queue");
        }

        writeQueue.remove();

        // fire message sent event
        eventDispatcher.dispatch(new Event(EventType.MESSAGE_SENT, session, buf.array(), handler));
    }

- Zero bytes were written, probably because the kernel's TCP send buffer is full: re-register interest in the write event and retry on a later round.

    // 0 bytes were written; the kernel buffer may be full, so re-register interest in writing and flush later.
    if (localWrittenBytes == 0) {
        if (LOG.isDebugEnabled()) {
            LOG.debug("0 bytes were written, maybe the kernel buffer is full; re-register interest in writing and flush later");
        }

        setInterestedInWrite(session, true);
        flushingSessions.add(session);
        return;
    }

- The write did not drain the buffer: again re-register interest in the write event and continue on a later round.

    // The buffer isn't empty (more bytes to flush than the max allowed), so re-register interest in writing and flush later.
    if (localWrittenBytes > 0 && buf.hasRemaining()) {
        if (LOG.isDebugEnabled()) {
            LOG.debug("The buffer isn't empty; re-register interest in writing and flush later");
        }

        setInterestedInWrite(session, true);
        flushingSessions.add(session);
        return;
    }

- Too much was written in this round: to preserve fairness, defer the remaining bytes to the next round.

    // Wrote too much, so re-register interest in writing and flush the remaining bytes later.
    if (writtenBytes >= maxWrittenBytes && buf.hasRemaining()) {
        if (LOG.isDebugEnabled()) {
            LOG.debug("Wrote too much; re-register interest in writing and flush the remaining bytes later");
        }

        setInterestedInWrite(session, true);
        flushingSessions.add(session);
        return;
    }

5. If any sessions need closing, close them. A close may be triggered by the application actively closing the session, or automatically after an I/O error. Since closing may be invoked from multiple threads, we avoid lock-based synchronization and instead use state checks to keep this path efficient.

Concretely, closing a session means calling channel.close() and key.cancel(). Even after these two calls, the file descriptor held by the socket is not fully released until the next select(); some NIO frameworks invoke select explicitly for this reason. Because our select(TIMEOUT) carries a timeout and wakes up on its own, this is not a problem here.

	private int close() throws IOException {
		int n = 0;
		for (AbstractSession session = closingSessions.poll(); session != null; session = closingSessions.poll()) {
			if (LOG.isDebugEnabled()) { LOG.debug("Closing session: " + session); }
			
			if (session.isClosed()) {
				if (LOG.isDebugEnabled()) { LOG.debug("Escape close session, it has been closed: " + session); }
				continue;
			}
			
			session.setClosing();
			
			close(session);
			n++;
			
			session.setClosed();
			
			// fire session closed event
			eventDispatcher.dispatch(new Event(EventType.SESSION_CLOSED, session, null, handler));
			
			if (LOG.isDebugEnabled()) { LOG.debug("Closed session: " + session); }
		}
		return n;
	}
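The cancel/close mechanics, and the fact that a cancelled key only leaves the selector's key set on the next selection pass, can be demonstrated with a Pipe channel standing in for the real SocketChannel (the demo class name is made up):

```java
import java.io.IOException;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class CloseSessionDemo {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();
        Pipe.SinkChannel sink = pipe.sink();
        sink.configureBlocking(false);
        SelectionKey key = sink.register(selector, SelectionKey.OP_WRITE);

        // Closing a session: cancel the key and close the channel.
        key.cancel();
        sink.close();

        // The cancelled key stays in the selector's key set until the next selection operation.
        System.out.println("before select: " + selector.keys().size());
        selector.selectNow(); // deregisters the cancelled key, releasing selector-side resources
        System.out.println("after select:  " + selector.keys().size());

        pipe.source().close();
        selector.close();
    }
}
```

This is exactly why the article's processor needs no special handling: its select(TIMEOUT) fires at least once per second, so cancelled keys are deregistered promptly without an explicit wakeup.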
